[TED-Ed Full English Transcripts] P1-P10 Collection
P1 What happened when we all stopped
Today we have something a little different. Dr. Jane Goodall is going to tell you a story. Stay tuned after the animation to learn how to download this as a free children's book. Ready? Let's begin.
It starts as a whisper, a word on the air. It can't quite be heard, but you know that it's there. As gentle as sunlight, as tenacious as hail, in its route to the heart, it could not but prevail. And the people looked up from their day-to-day tasks, their day-to-day jobs and their day-to-day masks. They heard or they felt where the whisper could lead, and they looked with eyes wide at what that might mean. And once they could see it, they hadn't a chance to resist the sweet song of the deep spell it cast. But the feeling it brought them at first glance was pain, as they lifted their eyes on the land they had claimed. Since they saw at last as if raised from a dream, they were almost alone on the land and the sea. For the trees had almost gone, and the bees had almost gone, and the creatures in their shells by the seas had almost gone. And the people felt sad as they saw their new Earth, but they knew this was it, one wild chance for rebirth. Breaking new ground, seeds rolling down, smell of the earth on your hands and your brow. No time for sorrow, we're building tomorrow. The sound of things growing now keeps us around. As the wildness grows, and the deep wood grows, and the sense that the future's come to meet you grows, there's no chance we can rest. We must do our best. This moment can lead us back home, that's our test. It starts as a whisper, a word on the air. It can't quite be heard, but you know that it's there. It then spoke like thunder. Until we all moved. And we could. And we did. And it's done. She's renewed.
Help turn the whisper into a roar by sharing this poem today. You can download the illustrated book for free at ed.ted.com/whisper or keep your soul aflutter with one of these animated poems.

P2 5 tips to improve your critical thinking
Every day, a sea of decisions stretches before us. Some are small and unimportant, but others have a larger impact on our lives. For example, which politician should I vote for? Should I try the latest diet craze? Or will this email make me a millionaire? We're bombarded with so many decisions that it's impossible to make a perfect choice every time. But there are many ways to improve our chances, and one particularly effective technique is critical thinking. This is a way of approaching a question that allows us to carefully deconstruct a situation, reveal its hidden issues, such as bias and manipulation, and make the best decision. If the critical part sounds negative, that's because, in a way, it is. Rather than choosing an answer because it feels right, a person who uses critical thinking subjects all available options to scrutiny and skepticism. Using the tools at their disposal, they'll eliminate everything but the most useful and reliable information. There are many different ways of approaching critical thinking, but here's one five-step process that may help you solve any number of problems.
One: formulate your question. In other words, know what you're looking for. This isn't always as straightforward as it sounds. For example, if you're deciding whether to try out the newest diet craze, your reasons for doing so may be obscured by other factors, like claims that you'll see results in just two weeks. But if you approach the situation with a clear view of what you're actually trying to accomplish by dieting, whether that's weight loss, better nutrition, or having more energy, that'll equip you to sift through this information critically, find what you're looking for, and decide whether the new fad really suits your needs. Two: gather your information. There's lots of it out there, so having a clear idea of your question will help you determine what's relevant. If you're trying to decide on a diet to improve your nutrition, you may ask an expert for their advice, or seek other people's testimonies. Information gathering helps you weigh different options, moving you closer to a decision that meets your goal. Three: apply the information, something you do by asking critical questions. Facing a decision, ask yourself, "What concepts are at work?" "What assumptions exist?" "Is my interpretation of the information logically sound?" For example, in an email that promises you millions, you should consider, "What is shaping my approach to this situation?" "Do I assume the sender is telling the truth?" "Based on the evidence, is it logical to assume I'll win any money?" Four: consider the implications. Imagine it's election time, and you've selected a political candidate based on their promise to make it cheaper for drivers to fill up on gas. At first glance, that seems great. But what about the long-term environmental effects? If gasoline use is less restricted by cost, this could also cause a huge surge in air pollution, an unintended consequence that's important to think about. Five: explore other points of view.
Ask yourself why so many people are drawn to the policies of the opposing political candidate. Even if you disagree with everything that candidate says, exploring the full spectrum of viewpoints might explain why some policies that don't seem valid to you appeal to others. This will allow you to explore alternatives, evaluate your own choices, and ultimately help you make more informed decisions.
This five-step process is just one tool, and it certainly won't eradicate difficult decisions from our lives. But it can help us increase the number of positive choices we make. Critical thinking can give us the tools to sift through a sea of information and find what we're looking for. And if enough of us use it, it has the power to make the world a more reasonable place.

P3 3 tips to boost your confidence
When faced with a big challenge where potential failure seems to lurk at every corner, maybe you've heard this advice before: "Be more confident." And most likely, this is what you think when you hear it: "If only it were that simple." But what is confidence? Take the belief that you are valuable, worthwhile, and capable, also known as self-esteem, add in the optimism that comes when you are certain of your abilities, and then empowered by these, act courageously to face a challenge head-on. This is confidence. It turns thoughts into action.
So where does confidence even come from? There are several factors that impact confidence. One: what you're born with, such as your genes, which will impact things like the balance of neurochemicals in your brain. Two: how you're treated. This includes the social pressures of your environment. And three: the part you have control over, the choices you make, the risks you take, and how you think about and respond to challenges and setbacks. It isn't possible to completely untangle these three factors, but the personal choices we make certainly play a major role in confidence development. So, by keeping in mind a few practical tips, we do actually have the power to cultivate our own confidence. Tip one: a quick fix. There are a few tricks that can give you an immediate confidence boost in the short term. Picture your success when you're beginning a difficult task. Something as simple as listening to music with deep bass can promote feelings of power. You can even strike a powerful pose or give yourself a pep talk. Tip two: believe in your ability to improve. If you're looking for a long-term change, consider the way you think about your abilities and talents. Do you think they are fixed at birth, or that they can be developed, like a muscle? These beliefs matter because they can influence how you act when you're faced with setbacks. If you have a fixed mindset, meaning that you think your talents are locked in place, you might give up, assuming you've discovered something you're not very good at. But if you have a growth mindset and think your abilities can improve, a challenge is an opportunity to learn and grow. Neuroscience supports the growth mindset. The connections in your brain do get stronger and grow with study and practice. It also turns out, on average, people who have a growth mindset are more successful, getting better grades, and doing better in the face of challenges. Tip three: practice failure. Face it, you're going to fail sometimes.
Everyone does. J.K. Rowling was rejected by twelve different publishers before one picked up "Harry Potter." The Wright Brothers built on history's failed attempts at flight, including some of their own, before designing a successful airplane.
Studies show that those who fail regularly and keep trying anyway are better equipped to respond to challenges and setbacks in a constructive way. They learn how to try different strategies, ask others for advice, and persevere. So, think of a challenge you want to take on, realize it's not going to be easy, accept that you'll make mistakes, and be kind to yourself when you do. Give yourself a pep talk, stand up, and go for it. The excitement you'll feel comes from knowing that, whatever the result, you'll have gained greater knowledge and understanding. This is confidence.

P4 A brief history of alcohol
This chimpanzee stumbles across a windfall of overripe plums. Many of them have split open, drawing him to their intoxicating fruity odor. He gorges himself and begins to experience some… strange effects. This unwitting ape has stumbled on a process that humans will eventually harness to create beer, wine, and other alcoholic drinks. The sugars in overripe fruit attract microscopic organisms known as yeasts. As the yeasts feed on the fruit sugars they produce a compound called ethanol— the type of alcohol in alcoholic beverages. This process is called fermentation.
Nobody knows exactly when humans began to create fermented beverages. The earliest known evidence comes from 7,000 BCE in China, where residue in clay pots has revealed that people were making an alcoholic beverage from fermented rice, millet, grapes, and honey. Within a few thousand years, cultures all over the world were fermenting their own drinks. Ancient Mesopotamians and Egyptians made beer throughout the year from stored cereal grains. This beer was available to all social classes, and workers even received it in their daily rations. They also made wine, but because the climate wasn’t ideal for growing grapes, it was a rare and expensive delicacy. By contrast, in Greece and Rome, where grapes grew more easily, wine was as readily available as beer was in Egypt and Mesopotamia. Because yeasts will ferment basically any plant sugars, ancient peoples made alcohol from whatever crops and plants grew where they lived. In South America, people made chicha from grains, sometimes adding hallucinogenic herbs. In what’s now Mexico, pulque, made from cactus sap, was the drink of choice, while East Africans made banana and palm beer. And in the area that’s now Japan, people made sake from rice. Almost every region of the globe had its own fermented drinks. As alcohol consumption became part of everyday life, some authorities latched onto effects they perceived as positive— Greek physicians considered wine to be good for health, and poets testified to its creative qualities. Others were more concerned about alcohol’s potential for abuse. Greek philosophers promoted temperance. Early Jewish and Christian writers in Europe integrated wine into rituals but considered excessive intoxication a sin. And in the Middle East, Africa, and Spain, an Islamic rule against praying while drunk gradually solidified into a general ban on alcohol. Ancient fermented beverages had relatively low alcohol content.
At about 13% alcohol, the by-product wild yeasts generate during fermentation becomes toxic and kills them. When the yeasts die, fermentation stops and the alcohol content levels off. So for thousands of years, alcohol content was limited.
That changed with the invention of a process called distillation. 9th century Arabic writings describe boiling fermented liquids to vaporize the alcohol in them. Alcohol boils at a lower temperature than water, so it vaporizes first. Capture this vapor, cool it down, and what’s left is liquid alcohol much more concentrated than any fermented beverage. At first, these stronger spirits were used for medicinal purposes. Then, spirits became an important trade commodity because, unlike beer and wine, they didn’t spoil. Rum made from sugar harvested in European colonies in the Caribbean became a staple for sailors and was traded to North America. Europeans brought brandy and gin to Africa and traded it for enslaved people, land, and goods like palm oil and rubber. Spirits became a form of money in these regions. During the Age of Exploration, spirits played a crucial role in long distance sea voyages. Sailing from Europe to east Asia and the Americas could take months, and keeping water fresh for the crews was a challenge. Adding a bucket of brandy to a water barrel kept water fresh longer because alcohol is a preservative that kills harmful microbes. So by the 1600s, alcohol had gone from simply giving animals a buzz to fueling global trade and exploration— along with all their consequences. As time went on, its role in human society would only get more complicated.

P5 A brief history of cannibalism
15th century Europeans believed they had hit upon a miracle cure: a remedy for epilepsy, hemorrhage, bruising, nausea, and virtually any other medical ailment. This brown powder could be mixed into drinks, made into salves or eaten straight up. It was known as mumia and made by grinding up mummified human flesh. The word "cannibal" dates from the time of Christopher Columbus; in fact, Columbus may even have coined it himself. After coming ashore on the island of Guadeloupe, Columbus' initial reports back to the Queen of Spain described the indigenous people as friendly and peaceful— though he did mention rumors of a group called the Caribs, who made violent raids and then cooked and ate their prisoners. In response, Queen Isabella granted permission to capture and enslave anyone who ate human flesh. When the island failed to produce the gold Columbus was looking for, he began to label anyone who resisted his plundering and kidnapping as a Caribe. Somewhere along the way, the word "Carib" became "Canibe" and then "Cannibal." First used by colonizers to dehumanize indigenous people, it has since been applied to anyone who eats human flesh. So the term comes from an account that wasn't based on hard evidence, but cannibalism does have a real and much more complex history. It has taken diverse forms— sometimes, as with mumia, it doesn't involve recognizable parts of the human body. The reasons for cannibalistic practices have varied, too. Across cultures and time periods, there's evidence of survival cannibalism, when people living through a famine, siege or ill-fated expedition had to either eat the bodies of the dead or starve to death themselves. But it's also been quite common for cultures to normalize some form of eating human flesh under ordinary circumstances.
Because of false accounts like Columbus's, it's difficult to say exactly how common cultural cannibalism has been— but there are still some examples of accepted cannibalistic practices from within the cultures practicing them. Take the medicinal cannibalism in Europe during Columbus's time. Starting in the 15th century, the demand for mumia increased. At first, stolen mummies from Egypt supplied the mumia craze, but soon the demand was too great to be sustained on Egyptian mummies alone, and opportunists stole bodies from European cemeteries to turn into mumia. Use of mumia continued for hundreds of years. It was listed in the Merck Index, a popular medical encyclopedia, into the 20th century. And ground up mummies were far from the only remedy made from human flesh that was common throughout Europe. Blood, in either liquid or powdered form, was used to treat epilepsy, while human liver, gall stones, oil distilled from human brains, and pulverized hearts were popular medical concoctions. In China, the written record of socially accepted cannibalism goes back almost 2,000 years. One particularly common form of cannibalism appears to have been filial cannibalism, where adult sons and daughters would offer a piece of their own flesh to their parents. This was typically offered as a last-ditch attempt to cure a sick parent, and wasn't fatal to their offspring— it usually involved flesh from the thigh or, less often, a finger. Cannibalistic funerary rites are another form of culturally sanctioned cannibalism. Perhaps the best-known example came from the Fore people of New Guinea. Through the mid-20th century, members of the community would, if possible, make their funerary preferences known in advance, sometimes requesting that family members gather to consume the body after death. Tragically, though these rituals honored the deceased, they also spread a deadly disease known as kuru through the community.
Between the fictionalized stories, verifiable practices, and big gaps that still exist in our knowledge, there's no one history of cannibalism. But we do know that people have been eating each other, volunteering themselves to be eaten, and accusing others of eating people for millennia.

P6 A brief history of chess
The attacking infantry advances steadily, their elephants already having broken the defensive line. The king tries to retreat, but enemy cavalry flanks him from the rear. Escape is impossible. But this isn’t a real war– nor is it just a game. Over the roughly one-and-a-half millennia of its existence, chess has been known as a tool of military strategy, a metaphor for human affairs, and a benchmark of genius. While our earliest records of chess date to the 7th century, legend tells that the game’s origins lie a century earlier. Supposedly, when the youngest prince of the Gupta Empire was killed in battle, his brother devised a way of representing the scene to their grieving mother. Set on the 8x8 ashtapada board used for other popular pastimes, a new game emerged with two key features: different rules for moving different types of pieces, and a single king piece whose fate determined the outcome. The game was originally known as chaturanga– a Sanskrit word for "four divisions." But with its spread to Sassanid Persia, it acquired its current name and terminology– "chess," derived from "shah," meaning king, and "checkmate" from "shah mat," or "the king is helpless." After the 7th century Islamic conquest of Persia, chess was introduced to the Arab world. Transcending its role as a tactical simulation, it eventually became a rich source of poetic imagery. Diplomats and courtiers used chess terms to describe political power. Ruling caliphs became avid players themselves.
And historian al-Mas’udi considered the game a testament to human free will compared to games of chance. Medieval trade along the Silk Road carried the game to East and Southeast Asia, where many local variants developed. In China, chess pieces were placed at intersections of board squares rather than inside them, as in the native strategy game Go. The reign of Mongol leader Tamerlane saw an 11x10 board with safe squares called citadels. And in Japanese shogi, captured pieces could be used by the opposing player. But it was in Europe that chess began to take on its modern form. By 1000 AD, the game had become part of courtly education. Chess was used as an allegory for different social classes performing their proper roles, and the pieces were re-interpreted in their new context. At the same time, the Church remained suspicious of games. Moralists cautioned against devoting too much time to them, with chess even being briefly banned in France. Yet the game proliferated, and the 15th century saw it cohering into the form we know today. The relatively weak piece of advisor was recast as the more powerful queen– perhaps inspired by the recent surge of strong female leaders. This change accelerated the game’s pace, and as other rules were popularized, treatises analyzing common openings and endgames appeared. Chess theory was born.
With the Enlightenment era, the game moved from royal courts to coffeehouses. Chess was now seen as an expression of creativity, encouraging bold moves and dramatic plays. This "Romantic" style reached its peak in the Immortal Game of 1851, where Adolf Anderssen managed a checkmate after sacrificing his queen and both rooks. But the emergence of formal competitive play in the late 19th century meant that strategic calculation would eventually trump dramatic flair. And with the rise of international competition, chess took on a new geopolitical importance. During the Cold War, the Soviet Union devoted great resources to cultivating chess talent, dominating the championships for the rest of the century. But the player who would truly upset Russian dominance was not a citizen of another country but an IBM computer called Deep Blue. Chess-playing computers had been developed for decades, but Deep Blue’s triumph over Garry Kasparov in 1997 was the first time a machine had defeated a sitting champion. Today, chess software is capable of consistently defeating the best human players. But just like the game they’ve mastered, these machines are products of human ingenuity. And perhaps that same ingenuity will guide us out of this apparent checkmate.

P7 A brie(f) history of cheese
Before empires and royalty, before pottery and writing, before metal tools and weapons – there was cheese. As early as 8000 BCE, the earliest Neolithic farmers living in the Fertile Crescent began a legacy of cheesemaking almost as old as civilization itself. The rise of agriculture led to domesticated sheep and goats, which ancient farmers harvested for milk. But when left in warm conditions for several hours, that fresh milk began to sour. Its lactic acids caused proteins to coagulate, binding into soft clumps. Upon discovering this strange transformation, the farmers drained the remaining liquid – later named whey – and found the yellowish globs could be eaten fresh as a soft, spreadable meal. These clumps, or curds, became the building blocks of cheese, which would eventually be aged, pressed, ripened, and whizzed into a diverse cornucopia of dairy delights. The discovery of cheese gave Neolithic people an enormous survival advantage. Milk was rich with essential proteins, fats, and minerals. But it also contained high quantities of lactose – a sugar which is difficult to process for many ancient and modern stomachs.
Cheese, however, could provide all of milk’s advantages with much less lactose. And since it could be preserved and stockpiled, these essential nutrients could be eaten throughout scarce famines and long winters. Some 7th millennium BCE pottery fragments found in Turkey still contain telltale residues of the cheese and butter they held. By the end of the Bronze Age, cheese was a standard commodity in maritime trade throughout the eastern Mediterranean. In the densely populated city-states of Mesopotamia, cheese became a staple of culinary and religious life. Some of the earliest known writing includes administrative records of cheese quotas, listing a variety of cheeses for different rituals and populations across Mesopotamia. Records from nearby civilizations in Turkey also reference rennet. This animal byproduct, produced in the stomachs of certain mammals, can accelerate and control coagulation. Eventually this sophisticated cheesemaking tool spread around the globe, giving way to a wide variety of new, harder cheeses. And though some conservative food cultures rejected the dairy delicacy, many more embraced cheese, and quickly added their own local flavors. Nomadic Mongolians used yaks’ milk to create hard, sundried wedges of Byaslag. Egyptians enjoyed goats’ milk cottage cheese, straining the whey with reed mats. In South Asia, milk was coagulated with a variety of food acids, such as lemon juice, vinegar, or yogurt and then hung to dry into loaves of paneer. This soft mild cheese could be added to curries and sauces, or simply fried as a quick vegetarian dish. The Greeks produced bricks of salty brined feta cheese, alongside a harder variety similar to today’s pecorino romano. This grating cheese was produced in Sicily and used in dishes all across the Mediterranean. Under Roman rule, “dry cheese” or “caseus aridus,” became an essential ration for the nearly 500,000 soldiers guarding the vast borders of the Roman Empire.
And when the Western Roman Empire collapsed, cheesemaking continued to evolve in the manors that dotted the medieval European countryside. In the hundreds of Benedictine monasteries scattered across Europe, medieval monks experimented endlessly with different types of milk, cheesemaking practices, and aging processes that led to many of today’s popular cheeses. Parmesan, Roquefort, Munster and several Swiss types were all refined and perfected by these cheesemaking clergymen. In the Alps, Swiss cheesemaking was particularly successful – producing a myriad of cow’s milk cheeses. By the end of the 14th century, Alpine cheese from the Gruyere region of Switzerland had become so profitable that a neighboring state invaded the Gruyere highlands to take control of the growing cheese trade. Cheese remained popular through the Renaissance, and the Industrial Revolution took production out of the monastery and into machinery. Today, the world produces roughly 22 billion kilograms of cheese a year, shipped and consumed around the globe. But 10,000 years after its invention, local farms are still following in the footsteps of their Neolithic ancestors, hand crafting one of humanity’s oldest and favorite foods.

P8 A brief history of goths
What do fans of atmospheric post-punk music have in common with ancient barbarians? Not much. So why are both known as goths? Is it a weird coincidence or a deeper connection stretching across the centuries? The story begins in Ancient Rome. As the Roman Empire expanded, it faced raids and invasions from the semi-nomadic populations along its borders. Among the most powerful were a Germanic people known as Goths who were composed of two tribal groups, the Visigoths and Ostrogoths. While some of the Germanic tribes remained Rome's enemies, the Empire incorporated others into the imperial army. As the Roman Empire split in two, these tribal armies played larger roles in its defense and internal power struggles. In the 5th century, a mercenary revolt led by a soldier named Odoacer captured Rome and deposed the Western Emperor. Odoacer and his Ostrogoth successor Theoderic technically remained under the Eastern Emperor's authority and maintained Roman traditions. But the Western Empire would never be united again. Its dominions fragmented into kingdoms ruled by Goths and other Germanic tribes who assimilated into local cultures, though many of their names still mark the map. This was the end of the Classical Period and the beginning of what many call the Dark Ages. Although Roman culture was never fully lost, its influence declined and new art styles arose focused on religious symbolism and allegory rather than proportion and realism.
This shift extended to architecture with the construction of the Abbey of Saint Denis in France in 1137. Pointed arches, flying buttresses, and large windows made the structure more skeletal and ornate, emphasizing its open, luminous interior rather than the sturdy walls and columns of Classical buildings. Over the next few centuries, this became a model for cathedrals throughout Europe. But fashions change. With the Italian Renaissance's renewed admiration for Ancient Greece and Rome, the more recent style began to seem crude and inferior in comparison. Writing in his 1550 book, "Lives of the Artists," Giorgio Vasari was the first to describe it as Gothic, a derogatory reference to the Barbarians thought to have destroyed Classical civilization. The name stuck, and soon came to describe the Medieval period overall, with its associations of darkness, superstition, and simplicity. But time marched on, as did what was considered fashionable. In the 1700s, a period called the Enlightenment came about, which valued scientific reason above all else. Reacting against that, Romantic authors like Goethe and Byron sought idealized visions of a past of natural landscapes and mysterious spiritual forces. Here, the word Gothic was repurposed again to describe a literary genre that emerged as a darker strain of Romanticism. The term was first applied by Horace Walpole to his own 1764 novel, "The Castle of Otranto," as a reference to the plot and general atmosphere. Many of the novel's elements became genre staples inspiring classics and the countless movies they spawned. The gothic label belonged to literature and film until the 1970s when a new musical scene emerged. Taking cues from artists like The Doors and The Velvet Underground, British post-punk groups, like Joy Division, Bauhaus, and The Cure, combined gloomy lyrics and punk dissonance with imagery inspired by the Victorian era, classic horror, and androgynous glam fashion.
By the early 1980s, similar bands were consistently described as Gothic rock by the music press, and the style's popularity brought it out of dimly lit clubs to major labels and MTV.
And today, despite occasional negative media attention and stereotypes, Gothic music and fashion continue as a strong underground phenomenon. They've also branched into sub-genres, such as cybergoth, gothabilly, gothic metal, and even steampunk. The history of the word gothic is embedded in thousands of years' worth of countercultural movements, from invading outsiders becoming kings to towering spires replacing solid columns to artists finding beauty in darkness. Each step has seen a revolution of sorts and a tendency for civilization to reach into its past to reshape its present.

P9 A brief history of melancholy
Sadness is part of the human experience, but for centuries there has been vast disagreement over what exactly it is and what, if anything, to do about it. In its simplest terms, sadness is often thought of as the natural reaction to a difficult situation. You feel sad when a friend moves away or when a pet dies. When a friend says, "I'm sad," you often respond by asking, "What happened?" But your assumption that sadness has an external cause outside the self is a relatively new idea. Ancient Greek doctors didn't view sadness that way. They believed it was a dark fluid inside the body. According to their humoral system, the human body and soul were controlled by four fluids, known as humors, and their balance directly influenced a person's health and temperament. Melancholia comes from melaina kole, the word for black bile, the humor believed to cause sadness. By changing your diet and through medical practices, you could bring your humors into balance. Even though we now know much more about the systems that govern the human body, these Greek ideas about sadness resonate with current views, not on the sadness we all occasionally feel, but on clinical depression. Doctors believe that certain kinds of long-term, unexplained emotional states are at least partially related to brain chemistry, the balance of various chemicals present inside the brain. Like the Greek system, changing the balance of these chemicals can deeply alter how we respond to even extremely difficult circumstances. There's also a long tradition of attempting to discern the value of sadness, and in that discussion, you'll find a strong argument that sadness is not only an inevitable part of life but an essential one. If you've never felt melancholy, you've missed out on part of what it means to be human.
Many thinkers contend that melancholy is necessary in gaining wisdom. Robert Burton, born in 1577, spent his life studying the causes and experience of sadness. In his masterpiece "The Anatomy of Melancholy," Burton wrote, "He that increaseth wisdom increaseth sorrow." The Romantic poets of the early 19th century believed melancholy allows us to more deeply understand other profound emotions, like beauty and joy. To understand the sadness of the trees losing their leaves in the fall is to more fully understand the cycle of life that brings flowers in the spring. But wisdom and emotional intelligence seem pretty high on the hierarchy of needs. Does sadness have value on a more basic, tangible, maybe even evolutionary level? Scientists think that crying and feeling withdrawn is what originally helped our ancestors secure social bonds and helped them get the support they needed. Sadness, as opposed to anger or violence, was an expression of suffering that could immediately bring people closer to the suffering person, and this helped both the person and the larger community to thrive. Perhaps sadness helped generate the unity we needed to survive, but many have wondered whether the suffering felt by others is anything like the suffering we experience ourselves. The poet Emily Dickinson wrote, "I measure every Grief I meet With narrow, probing Eyes - I wonder if it weighs like Mine - Or has an Easier size." And in the 20th century, medical anthropologists, like Arthur Kleinman, gathered evidence from the way people talk about pain to suggest that emotions aren't universal at all, and that culture, particularly the way we use language, can influence how we feel. When we talk about heartbreak, the feeling of brokenness becomes part of our experience, whereas in a culture that talks about a bruised heart, there actually seems to be a different subjective experience.
Some contemporary thinkers aren't interested in sadness' subjectivity versus universality, and would rather use technology to eliminate suffering in all its forms. David Pearce has suggested that genetic engineering and other contemporary processes can not only alter the way humans experience emotional and physical pain, but that world ecosystems ought to be redesigned so that animals don't suffer in the wild. He calls his project "paradise engineering."
But is there something sad about a world without sadness? Our caveman ancestors and favorite poets might not want any part of such a paradise. In fact, the only things about sadness that seem universally agreed upon are that it has been felt by most people throughout time, and that for thousands of years, one of the best ways we have to deal with this difficult emotion is to articulate it, to try to express what feels inexpressible. In the words of Emily Dickinson, "'Hope' is the thing with feathers - That perches in the soul - And sings the tune without the words - And never stops - at all -"

P10 A brief history of numerical systems
One, two, three, four, five, six, seven, eight, nine, and zero. With just these ten symbols, we can write any rational number imaginable. But why these particular symbols? Why ten of them? And why do we arrange them the way we do? Numbers have been a fact of life throughout recorded history. Early humans likely counted animals in a flock or members in a tribe using body parts or tally marks. But as the complexity of life increased, along with the number of things to count, these methods were no longer sufficient. So as they developed, different civilizations came up with ways of recording higher numbers. Many of these systems, like Greek, Hebrew, and Egyptian numerals, were just extensions of tally marks with new symbols added to represent larger magnitudes of value. Each symbol was repeated as many times as necessary and all were added together. Roman numerals added another twist. If a numeral appeared before one with a higher value, it would be subtracted rather than added. But even with this innovation, it was still a cumbersome method for writing large numbers. The way to a more useful and elegant system lay in something called positional notation. Previous number systems needed to draw many symbols repeatedly and invent a new symbol for each larger magnitude. But a positional system could reuse the same symbols, assigning them different values based on their position in the sequence. Several civilizations developed positional notation independently, including the Babylonians, the Ancient Chinese, and the Aztecs. By the 8th century, Indian mathematicians had perfected such a system and over the next several centuries, Arab merchants, scholars, and conquerors began to spread it into Europe. This was a decimal, or base ten, system, which could represent any number using only ten unique glyphs. The positions of these symbols indicate different powers of ten, starting on the right and increasing as we move left.
For example, the number 316 reads as 6x10^0 plus 1x10^1 plus 3x10^2. A key breakthrough of this system, which was also independently developed by the Mayans, was the number zero. Older positional notation systems that lacked this symbol would leave a blank in its place, making it hard to distinguish between 63 and 603, or 12 and 120. The understanding of zero as both a value and a placeholder made for reliable and consistent notation. Of course, it's possible to use any ten symbols to represent the numerals zero through nine. For a long time, the glyphs varied regionally. Most scholars agree that our current digits evolved from those used in the North African Maghreb region of the Arab Empire. And by the 15th century, what we now know as the Hindu-Arabic numeral system had replaced Roman numerals in everyday life to become the most commonly used number system in the world.
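The positional expansion described above can be sketched in a few lines of Python (a minimal illustration added here, not part of the original lesson; the function name digits_value is ours):

```python
def digits_value(digits, base=10):
    """Evaluate a sequence of digits under positional notation.

    Each position, counted from the right, contributes
    digit * base**position to the total.
    """
    total = 0
    for position, digit in enumerate(reversed(digits)):
        total += digit * base ** position
    return total

# 316 = 6x10^0 + 1x10^1 + 3x10^2, exactly as in the example above.
print(digits_value([3, 1, 6]))   # 316

# Zero as a placeholder is what keeps 63 and 603 distinct:
print(digits_value([6, 3]))      # 63
print(digits_value([6, 0, 3]))   # 603
```

Without a symbol for zero there is no way to record the empty tens place in [6, 0, 3], which is exactly the ambiguity the older systems suffered from.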
So why did the Hindu-Arabic system, along with so many others, use base ten? The most likely answer is the simplest: we have ten fingers. That also explains why the Aztecs used a base 20, or vigesimal, system. But other bases are possible, too. Babylonian numerals were sexagesimal, or base 60. And many people think that a base 12, or duodecimal, system would be a good idea. Like 60, 12 is a highly composite number that can be divided by two, three, four, and six, making it much better for representing common fractions. In fact, both systems appear in our everyday lives, from how we measure degrees and time, to common measurements, like a dozen or a gross. And, of course, the base two, or binary, system is used in all of our digital devices, though programmers also use base eight and base 16 for more compact notation. So the next time you use a large number, think of the massive quantity captured in just these few symbols, and see if you can come up with a different way to represent it.
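One way to try a "different representation" yourself is the standard divide-and-collect-remainders method, sketched below in Python (our own illustration; the name to_base and the digit set are assumptions, and bases above 16 would need more digit symbols):

```python
def to_base(n, base, digits="0123456789ABCDEF"):
    """Write a non-negative integer in the given base (2 to 16)
    by repeatedly dividing by the base and collecting remainders."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, base)   # r is the next digit, rightmost first
        out.append(digits[r])
    return "".join(reversed(out))

print(to_base(316, 2))    # '100111100'  (binary, as in digital devices)
print(to_base(316, 8))    # '474'        (octal)
print(to_base(316, 12))   # '224'        (duodecimal)
print(to_base(316, 16))   # '13C'        (hexadecimal)
```

The same quantity, 316, takes nine symbols in binary but only three in hexadecimal, which is why programmers reach for bases eight and 16 when compactness matters.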