[TED-Ed English Transcripts] P41-P50 Collection
P41 Buffalo buffalo buffalo: One-word sentences and how they work
You may think you know the words that sit plainly in black on your page, but don't be fooled. Some words are capable of taking on different guises, masquerading as nouns, verbs and adjectives that alter their meanings entirely. This seeming superpower is called lexical ambiguity. It can turn words and sentences into mazes that mess with our minds. For example, consider the following: Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo. That may sound like nonsense, but it's actually a grammatically correct sentence. How? Well, Buffalo is a proper noun, a noun, and a verb. It refers to an animal also known as a bison, an American city, and it can also mean to bully. These different interpretations create a sequence of words that is grammatically correct as it stands, though it helps to add in a few implied phrases and punctuation marks to reveal what's really going on. Buffalo buffalo are bison from the city of Buffalo, and this sentence has three groups of them. Group A, which is bullied by Group B, bullies Group C. In other words, bison from Buffalo that other bison from Buffalo bully also bully bison from Buffalo. If you let each buffalo perform its role, the meaning becomes apparent. What if the bunch of bullying buffalo decides to cross the ocean? Not just on any ship, but a ship-shipping ship shipping shipping-ships? That sentence sounds just as outrageous, but there's logic to the babble. Ship can mean a vessel and to transport. When we sub in those meanings, a clearer picture emerges. Here we have a huge ship-carrying vessel transporting ships that themselves are designed to carry goods across the sea. A ship-shipping ship, shipping shipping-ships. How about some entertainment on board this unusual vessel to offset the scuffling buffalo? Consider the can-can. Can-can can-can can can can can can-can. Here, the word can comes in many guises. There's can-can, the flamboyant dance, can, that means able to, and can, figuratively meaning to outperform.
By sticking in a comma and including the implied meanings, this sentence becomes clearer. Can-can dances that can-can dances are able to outperform, can also outperform other can-can dances. You wouldn't necessarily use any of these sentences in a conversation. They're just too ridiculous. Yet they serve as an extreme example of just how tangled everyday language can be. Lexical ambiguities sail into our speech and writing all the time, spreading confusion and misunderstanding wherever they can-can.

P42 Can 100% renewable energy power the world?
Every year, the world uses 35 billion barrels of oil. This massive scale of fossil fuel dependence pollutes the Earth and it won't last forever. Scientists estimate that we've consumed about 40% of the world's oil. According to present estimates, at this rate, we'll run out of oil and gas in 50 years or so, and in about a century for coal. On the flip side, we have abundant sun, water, and wind. These are renewable energy sources, meaning that we won't use them up over time. What if we could exchange our fossil fuel dependence for an existence based solely on renewables? We've pondered that question for decades, and yet, renewable energy still only provides about 13% of our needs. That's because reaching 100% requires renewable energy that's inexpensive and accessible. This represents a huge challenge, even if we ignore the politics involved and focus on the science and engineering. We can better understand the problem by understanding how we use energy. Global energy use is a diverse and complex system, and the different elements require their own solutions. But for now, we'll focus on two of the most familiar in everyday life: electricity and liquid fuels. Electricity powers blast furnaces, elevators, computers, and all manner of things in homes, businesses, and manufacturing. Meanwhile, liquid fuels play a crucial role in almost all forms of transportation. Let's consider the electrical portion first. The great news is that our technology is already advanced enough to capture all that energy from renewables, and there's an ample supply. The sun continuously radiates about 173 quadrillion watts of solar energy at the Earth, which is almost 10,000 times our present needs. It's been estimated that a surface that spans several hundred thousand square kilometers would be needed to power humanity at our present usage levels. So why don't we build that? Because there are other hurdles in the way, like efficiency and energy transportation.
To maximize efficiency, solar plants must be located in areas with lots of sunshine year round, like deserts. But those are far away from densely populated regions where energy demand is high. There are other forms of renewable energy we could draw from, such as hydroelectric, geothermal, and biomass, but they also have limits based on availability and location. In principle, a connected electrical energy network with power lines crisscrossing the globe would enable us to transport power from where it's generated to where it's needed. But building a system on this scale faces an astronomical price tag. We could lower the cost by developing advanced technologies to capture energy more efficiently. The infrastructure for transporting energy would also have to change drastically. Present-day power lines lose about 6-8% of the energy they carry because wire material dissipates energy through resistance. Longer power lines would mean more energy loss. Superconductors could be one solution. Such materials can transport electricity without dissipation. Unfortunately, they only work if cooled to low temperatures, which requires energy and defeats the purpose. To benefit from that technology, we'd need to discover new superconducting materials that operate at room temperature. And what about the all-important, oil-derived liquid fuels? The scientific challenge there is to store renewable energy in an easily transportable form. Recently, we've gotten better at producing lithium ion batteries, which are lightweight and have high-energy density. But even the best of these store about 2.5 megajoules per kilogram. That's about 20 times less than the energy in one kilogram of gasoline. To be truly competitive, car batteries would have to store much more energy without adding cost. The challenges only increase for bigger vessels, like ships and planes. To power a cross-Atlantic flight for a jet, we'd need a battery weighing about 1,000 tons.
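The headline numbers above can be checked with a few lines of arithmetic. Here is a minimal Python sketch; it assumes the figures quoted in the text (173 quadrillion watts of incoming sunlight, 2.5 MJ/kg for the best batteries) plus a typical ~46 MJ/kg energy density for gasoline, which is an outside value not stated in the transcript.

```python
# Back-of-envelope checks of the figures quoted above. The gasoline
# energy density (~46 MJ/kg) is a typical outside value, assumed here;
# the other numbers come from the text.
solar_input_w = 173e15            # ~173 quadrillion watts reaching Earth
ratio_to_demand = 10_000          # "almost 10,000 times our present needs"
world_demand_w = solar_input_w / ratio_to_demand
print(f"Implied world energy demand: {world_demand_w / 1e12:.1f} TW")

battery_mj_per_kg = 2.5           # best lithium-ion cells, per the text
gasoline_mj_per_kg = 46.0         # assumed typical value for gasoline
ratio = gasoline_mj_per_kg / battery_mj_per_kg
print(f"Gasoline stores ~{ratio:.0f}x more energy per kilogram")
```

The implied demand comes out around 17 terawatts, and the gasoline-to-battery ratio lands near 18, consistent with the text's "about 20 times less."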
This, too, demands a technological leap towards new materials, higher energy density, and better storage. One promising solution would be to find efficient ways to convert solar into chemical energy. This is already happening in labs, but the efficiency is still too low to allow it to reach the market. To find novel solutions, we'll need lots of creativity, innovation, and powerful incentives. The transition towards all-renewable energies is a complex problem involving technology, economics, and politics. Priorities on how to tackle this challenge depend on the specific assumptions we have to make when trying to solve such a multifaceted problem. But there's ample reason to be optimistic that we'll get there. Top scientific minds around the world are working on these problems and making breakthroughs all the time. And many governments and businesses are investing in technologies that harness the energy all around us.

P43 Can a black hole be destroyed?
Black holes are among the most destructive objects in the universe. Anything that gets too close to the central singularity of a black hole, be it an asteroid, planet, or star, risks being torn apart by its extreme gravitational field. And if the approaching object happens to cross the black hole’s event horizon, it’ll disappear and never re-emerge, adding to the black hole’s mass and expanding its radius in the process. There is nothing we could throw at a black hole that would do the least bit of damage to it. Even another black hole won’t destroy it– the two will simply merge into a larger black hole, releasing a bit of energy as gravitational waves in the process. By some accounts, it’s possible that the universe may eventually consist entirely of black holes in a very distant future. And yet, there may be a way to destroy, or “evaporate,” these objects after all. If the theory is true, all we need to do is to wait. In 1974, Stephen Hawking theorized a process that could lead a black hole to gradually lose mass. Hawking radiation, as it came to be known, is based on a well-established phenomenon called quantum fluctuations of the vacuum. According to quantum mechanics, a given point in spacetime fluctuates between multiple possible energy states. These fluctuations are driven by the continuous creation and destruction of virtual particle pairs, which consist of a particle and its oppositely charged antiparticle. Normally, the two collide and annihilate each other shortly after appearing, preserving the total energy. But what happens when they appear just at the edge of a black hole’s event horizon? If they’re positioned just right, one of the particles could escape the black hole’s pull while its counterpart falls in. It would then annihilate another oppositely charged particle within the event horizon of the black hole, reducing the black hole’s mass. Meanwhile, to an outside observer, it would look like the black hole had emitted the escaped particle. 
Thus, unless a black hole continues to absorb additional matter and energy, it’ll evaporate particle by particle, at an excruciatingly slow rate. How slow? A branch of physics, called black hole thermodynamics, gives us an answer. When everyday objects or celestial bodies release energy to their environment, we perceive that as heat, and can use their energy emission to measure their temperature. Black hole thermodynamics suggests that we can similarly define the “temperature” of a black hole. It theorizes that the more massive the black hole, the lower its temperature. The universe’s largest black holes would give off temperatures of the order of 10 to the -17th power Kelvin, very close to absolute zero. Meanwhile, one with the mass of the asteroid Vesta would have a temperature close to 200 degrees Celsius, thus releasing a lot of energy in the form of Hawking Radiation to the cold outside environment. The smaller the black hole, the hotter it seems to be burning– and the sooner it’ll burn out completely. Just how soon? Well, don’t hold your breath. First of all, most black holes accrete, or absorb matter and energy, more quickly than they emit Hawking radiation. But even if a black hole with the mass of our Sun stopped accreting, it would take 10 to the 67th power years– many many magnitudes longer than the current age of the Universe— to fully evaporate. When a black hole reaches about 230 metric tons, it’ll have only one more second to live. In that final second, its event horizon becomes increasingly tiny, until finally releasing all of its energy back into the universe. And while Hawking radiation has never been directly observed, some scientists believe that certain gamma ray flashes detected in the sky are actually traces of the last moments of small, primordial black holes formed at the dawn of time. Eventually, in an almost inconceivably distant future, the universe may be left as a cold and dark place. 
But if Stephen Hawking was right, before that happens, the normally terrifying and otherwise impervious black holes will end their existence in a final blaze of glory.
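For readers who want to see where figures like these come from, here is a hedged sketch using the standard textbook formulas for Hawking temperature and evaporation time (the formulas themselves are not given in the transcript; they are the conventional ones). Evaluated for a solar-mass black hole, they give a temperature of roughly 6 x 10^-8 kelvin and an evaporation time of roughly 2 x 10^67 years, consistent with the "10 to the 67th power years" figure above.

```python
import math

# Standard Hawking-radiation formulas (textbook versions, not given in
# the transcript), evaluated for a solar-mass black hole. All constants
# are in SI units.
hbar = 1.0546e-34    # reduced Planck constant (J*s)
c = 2.998e8          # speed of light (m/s)
G = 6.674e-11        # gravitational constant
k_B = 1.381e-23      # Boltzmann constant (J/K)
M_sun = 1.989e30     # solar mass (kg)

def hawking_temperature(M):
    """Temperature in kelvin: T = hbar*c^3 / (8*pi*G*M*k_B)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def evaporation_time_years(M):
    """Evaporation time: t = 5120*pi*G^2*M^3 / (hbar*c^4), in years."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4) / 3.156e7

print(f"Solar-mass Hawking temperature: {hawking_temperature(M_sun):.1e} K")
print(f"Evaporation time: {evaporation_time_years(M_sun):.1e} years")
```

Note how both formulas scale with mass: temperature falls as 1/M while lifetime grows as M^3, which is why the biggest black holes are the coldest and the longest-lived.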

P44 Can animals be deceptive?
A male firefly glows above a field on a summer’s night, emitting a series of enticing flashes. He hopes a nearby female will respond with her own lightshow and mate with him. Sadly for this male, it won’t turn out quite the way he plans. A female from a different species mimics his pulsing patterns: by tricking the male with her promise of partnership, she lures him in– and turns him into an easy meal. He’s been deceived. Behavioral biologists have identified three defining hallmarks of deception by non-human animals: it must mislead the receiver, the deceiver must benefit, and it can’t simply be an accident. In this case we know that the predatory firefly’s signal isn’t an accident because she flexibly adjusts her flash pattern to match males of different species. Based on this definition, where is animal deception seen in nature? Camouflage is a good starting point– and one of the most familiar examples of animal trickery. The leaf-tailed gecko and the octopus fool viewers by blending into the surfaces on which they rest. Other animals use mimicry to protect themselves. Harmless scarlet kingsnakes have evolved red, yellow, and black patterns resembling those of the venomous eastern coral snake to benefit from the protective warnings these markings convey. Even some plants use mimicry: there are orchids that look and smell like female wasps to attract hapless males, who end up pollinating the plant. Some of these animals benefit by having fixed characteristics that are evolutionarily suited to their environments. But in other cases, the deceiver seems to anticipate the reactions of other animals and to adjust its behavior accordingly. Sensing a threat, the octopus will rapidly change its colors to match its surroundings. Dwarf chameleons color-match their environments more closely when they see a bird predator rather than a snake– birds, after all, have better color vision. One of the more fascinating examples of animal deception comes from the fork-tailed drongo.
This bird sits atop tall trees in the Kalahari Desert, surveying the landscape for predators and calling when it senses a threat. That sends meerkats, pied babblers, and others dashing for cover. But the drongo will also sound a false alarm when those other species have captured prey. As the meerkats and babblers flee, the drongo swoops down to steal their catches. This tactic works about half the time– and it provides drongos with much of their food. There are fewer solid cases of animals using signals to trick members of their own species, but that happens too. Consider the mantis shrimp. Like other crustaceans, it molts as it grows, which leaves its soft body vulnerable to attack. But it’s still driven to protect its home against rivals. So it has become a masterful bluffer. Despite being fragile, a newly molted shrimp is actually more likely to threaten intruders, spreading the large limbs it usually uses to strike or stab its opponents. And that works – bluffers are more likely to keep their homes than non-bluffers. In its softened condition, a mantis shrimp couldn’t withstand a fight– which is why we can be confident that its behavior is a bluff. Biologists have even noticed that its bluffs are tactical: newly molted mantis shrimp are more likely to bluff against smaller rivals, who are especially likely to be driven away. It would seem that instead of just threatening reflexively, the mantis shrimp is swiftly gauging the situation and predicting others’ behavior, to get the best result. So we know that animals can deceive, but do they do so with intent? That’s a difficult question, and many scientists think we'll never be able to answer it. We can't observe animals’ internal thoughts. But we don’t need to know what an animal is thinking in order to detect deception. By watching behavior and its outcomes, we learn that animals manipulate predators, prey, and rivals, and that their capacity for deception can be surprisingly complex.

P45 Can machines read your emotions?
With every year, machines surpass humans in more and more activities we once thought only we were capable of. Today's computers can beat us in complex board games, transcribe speech in dozens of languages, and instantly identify almost any object. But the robots of tomorrow may go further by learning to figure out what we're feeling. And why does that matter? Because if machines and the people who run them can accurately read our emotional states, they may be able to assist us or manipulate us at unprecedented scales. But before we get there, how can something so complex as emotion be converted into mere numbers, the only language machines understand? Essentially the same way our own brains interpret emotions, by learning how to spot them. American psychologist Paul Ekman identified certain universal emotions whose visual cues are understood the same way across cultures. For example, an image of a smile signals joy to modern urban dwellers and aboriginal tribesmen alike. And according to Ekman, anger, disgust, fear, joy, sadness, and surprise are equally recognizable. As it turns out, computers are rapidly getting better at image recognition thanks to machine learning algorithms, such as neural networks. These consist of artificial nodes that mimic our biological neurons by forming connections and exchanging information. To train the network, sample inputs pre-classified into different categories, such as photos marked happy or sad, are fed into the system. The network then learns to classify those samples by adjusting the relative weights assigned to particular features. The more training data it's given, the better the algorithm becomes at correctly identifying new images. This is similar to our own brains, which learn from previous experiences to shape how new stimuli are processed. Recognition algorithms aren't just limited to facial expressions. Our emotions manifest in many ways.
There's body language and vocal tone, changes in heart rate, complexion, and skin temperature, or even word frequency and sentence structure in our writing. You might think that training neural networks to recognize these would be a long and complicated task until you realize just how much data is out there, and how quickly modern computers can process it. From social media posts, uploaded photos and videos, and phone recordings, to heat-sensitive security cameras and wearables that monitor physiological signs, the big question is not how to collect enough data, but what we're going to do with it. There are plenty of beneficial uses for computerized emotion recognition. Robots using algorithms to identify facial expressions can help children learn or provide lonely people with a sense of companionship. Social media companies are considering using algorithms to help prevent suicides by flagging posts that contain specific words or phrases. And emotion recognition software can help treat mental disorders or even provide people with low-cost automated psychotherapy. Despite the potential benefits, the prospect of a massive network automatically scanning our photos, communications, and physiological signs is also quite disturbing. What are the implications for our privacy when such impersonal systems are used by corporations to exploit our emotions through advertising? And what becomes of our rights if authorities think they can identify the people likely to commit crimes before they even make a conscious decision to act? Robots currently have a long way to go in distinguishing emotional nuances, like irony, and scales of emotions, just how happy or sad someone is. Nonetheless, they may eventually be able to accurately read our emotions and respond to them. Whether they can empathize with our fear of unwanted intrusion, however, that's another story.
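The training loop described above, feeding in pre-labeled samples and adjusting weights, can be sketched in a few lines. This is a toy: a single artificial "neuron" rather than a full network, and the two numeric features (imagine mouth curvature and brow furrow) are made-up stand-ins for real image data, not an actual vision pipeline.

```python
import math

# Toy sketch of the training loop described above: a single artificial
# "neuron" learns weights separating samples pre-labeled happy (1) or
# sad (0). The two features are made-up stand-ins (say, mouth curvature
# and brow furrow), not a real vision pipeline.
data = [([0.9, 0.1], 1), ([0.8, 0.2], 1),   # "happy" training samples
        ([0.1, 0.9], 0), ([0.2, 0.8], 0)]   # "sad" training samples

w, b = [0.0, 0.0], 0.0
sigmoid = lambda z: 1 / (1 + math.exp(-z))

for _ in range(1000):                        # repeated weight adjustments
    for x, label in data:
        pred = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = pred - label                   # how wrong the guess was
        w = [wi - 0.5 * err * xi for wi, xi in zip(w, x)]
        b -= 0.5 * err

# Classify a new, unseen "face" the network was never trained on.
prediction = sigmoid(w[0] * 0.85 + w[1] * 0.15 + b)
print(prediction > 0.5)  # True: classified as happy
```

The key idea the transcript describes is visible in the loop: each labeled sample nudges the relative weights, and more data means more nudges toward a reliable classifier.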

P46 Can plants talk to each other?
Can plants talk to each other? It certainly doesn't seem that way. Plants don't have complex sensory or nervous systems like animals do, and they look pretty passive, basking in the sun, and responding instinctively to inputs like light and water. But odd as it sounds, plants can communicate with each other. Just like animals, plants produce all kinds of chemical signals in response to their environments, and they can share those signals with each other, especially when they're under attack. These signals take two routes: through the air, and through the soil. When plant leaves get damaged, whether by hungry insects or an invading lawn mower, they release plumes of volatile chemicals. They're what's responsible for the smell of freshly cut grass. Certain kinds of plants, like sagebrush and lima beans, are able to pick up on those airborne messages and adjust their own internal chemistry accordingly. In one experiment, sagebrush leaves were deliberately damaged by insects or scissor-wielding scientists. Throughout the summer, other branches on the same sagebrush plant got eaten less by insects wandering through, and so did branches on neighboring bushes, suggesting that they had beefed up their anti-insect defenses. Even moving the air from above a clipped plant to another one made the second plant more insect-resistant. These airborne cues increase the likelihood of seedling survival, and made adult plants produce more new branches and flowers. But why would a plant warn its neighbors of danger, especially if they're competing for resources? Well, it might be an accidental consequence of a self-defense mechanism. Plants can't move information through their bodies as easily as we can, especially if water is scarce. So plants may rely on those airborne chemicals to get messages from one part of a plant to another. Nearby plants can eavesdrop on those signals, like overhearing your neighbor sneeze and stocking up on cold medicine. 
Different plants convey those warnings using different chemical languages. Individual sagebrush plants in the same meadow release slightly different sets of alarm chemicals. The makeup of that cocktail influences the effectiveness of communication. The more similar two plants' chemical fingerprints are, the more fluently they can communicate. A plant will be most sensitive to the cues emitted by its own leaves. But because these chemicals seem to be inherited, like human blood types, sagebrush plants communicate more effectively with relatives than with strangers. But sometimes, even other species can benefit. Tomato and tobacco plants can both decipher sagebrush warning signals. Plants don't have to rely solely on those airborne broadcasts. Signals can travel below the soil surface, too. Most plants have a symbiotic relationship with fungi, which colonize the plants' roots and help them absorb water and nutrients. These fungal filaments form extensive networks that can connect separate plants, creating an underground super highway for chemical messages. When a tomato plant responds to blight by activating disease-fighting genes and enzymes, signaling molecules produced by its immune system can travel to a healthy plant and prompt it to turn on its immune system, too. These advance warnings increase the plants' chance of survival. Bean plants also eavesdrop on each other's health through these fungal conduits. An aphid infestation in one plant triggers its neighbor to ramp up production of compounds that repel aphids and attract aphid-eating wasps. If you think of communication as an exchange of information, then plants seem to be active communicators. They're sending, receiving, and responding to signals without making a sound, and without brains, noses, dictionaries, or the Internet. And if we can learn to speak to them on their terms, we may gain a powerful new tool to protect crops and other valuable species. It all makes you wonder what else are we missing?

P47 Can robots be creative?
How does this music make you feel? Do you find it beautiful? Is it creative? Now, would you change your answers if you learned the composer was this robot? Believe it or not, people have been grappling with the question of artificial creativity, alongside the question of artificial intelligence, for over 170 years. In 1843, Lady Ada Lovelace, an English mathematician considered the world's first computer programmer, wrote that a machine could not have human-like intelligence as long as it only did what humans intentionally programmed it to do. According to Lovelace, a machine must be able to create original ideas if it is to be considered intelligent. The Lovelace Test, formalized in 2001, proposes a way of scrutinizing this idea. A machine can pass this test if it can produce an outcome that its designers cannot explain based on their original code. The Lovelace Test is, by design, more of a thought experiment than an objective scientific test. But it's a place to start. At first glance, the idea of a machine creating high quality, original music in this way might seem impossible. We could come up with an extremely complex algorithm using random number generators, chaotic functions, and fuzzy logic to generate a sequence of musical notes in a way that would be impossible to track. But although this would yield countless original melodies never heard before, only a tiny fraction of them would be worth listening to, with the computer having no way to distinguish between those which we would consider beautiful and those which we wouldn't. But what if we took a step back and tried to model a natural process that allows creativity to form? We happen to know of at least one such process that has led to original, valuable, and even beautiful outcomes: the process of evolution. And evolutionary algorithms, or genetic algorithms that mimic biological evolution, are one promising approach to making machines generate original and valuable artistic outcomes.
So how can evolution make a machine musically creative? Well, instead of organisms, we can start with an initial population of musical phrases, and a basic algorithm that mimics reproduction and random mutations by switching some parts, combining others, and replacing random notes. Now that we have a new generation of phrases, we can apply selection using an operation called a fitness function. Just as biological fitness is determined by external environmental pressures, our fitness function can be determined by an external melody chosen by human musicians, or music fans, to represent the ultimate beautiful melody. The algorithm can then compare between our musical phrases and that beautiful melody, and select only the phrases that are most similar to it. Once the least similar sequences are weeded out, the algorithm can reapply mutation and recombination to what's left, select the most similar, or fittest, ones again from the new generation, and repeat for many generations. The process that got us there has so much randomness and complexity built in that the result might pass the Lovelace Test. More importantly, thanks to the presence of human aesthetics in the process, we'll theoretically generate melodies we would consider beautiful. But does this satisfy our intuition for what is truly creative? Is it enough to make something original and beautiful, or does creativity require intention and awareness of what is being created? Perhaps the creativity in this case is really coming from the programmers, even if they don't understand the process. What is human creativity, anyways? Is it something more than a system of interconnected neurons developed by biological algorithmic processes and the random experiences that shape our lives? Order and chaos, machine and human. These are the dynamos at the heart of machine creativity initiatives that are currently making music, sculptures, paintings, poetry and more.
The jury may still be out as to whether it's fair to call these acts of creation creative. But if a piece of art can make you weep, or blow your mind, or send shivers down your spine, does it really matter who or what created it?
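The mutate-select-repeat loop described above can be sketched in a few lines. In this toy version, phrases are strings of note names, and the fitness function simply rewards similarity to a target melody standing in for human aesthetic judgment; the target phrase and note alphabet are illustrative assumptions, not anything from a real system.

```python
import random

# Minimal sketch of the evolutionary loop described above. Phrases are
# strings of note names; fitness rewards similarity to a target melody
# that stands in for human aesthetic judgment. The target and note
# alphabet here are illustrative assumptions.
random.seed(42)
NOTES = "CDEFGAB"
TARGET = "EDCDEEE"                 # hypothetical "beautiful melody"

def fitness(phrase):
    """Count positions where the phrase matches the target."""
    return sum(a == b for a, b in zip(phrase, TARGET))

def mutate(phrase):
    """Replace one randomly chosen note (a random mutation)."""
    i = random.randrange(len(phrase))
    return phrase[:i] + random.choice(NOTES) + phrase[i + 1:]

# Initial population of random phrases.
population = ["".join(random.choice(NOTES) for _ in TARGET)
              for _ in range(50)]

for generation in range(500):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == len(TARGET):
        break                                  # perfect match reached
    survivors = population[:10]                # selection step
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(40)]

population.sort(key=fitness, reverse=True)
print(generation, population[0])               # best phrase found
```

A real system would use a richer fitness function than exact matching, of course, but the skeleton (mutation, selection against a human-chosen standard, repetition over generations) is the one the transcript describes.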

P48 Can steroids save your life?
Steroids: they’re infamous for their use in sports. But they’re also found in inhalers, creams to treat poison ivy and eczema, and shots to ease inflammation. The steroids in these medicines aren’t the same as the ones used to build muscle. In fact, they’re all based on yet another steroid— one our body produces naturally, and we can’t live without. Taking a step back, the reason there are so many different steroids is because the term refers to substances with a shared molecular structure, rather than shared effects on the body. Steroids can be naturally occurring or synthetic, but what all steroids have in common is a molecular structure that consists of a base of four rings made of 17 carbon atoms arranged in three hexagons and one pentagon. A molecule must contain this exact arrangement to be a steroid, though most also have side chains— additional atoms that can dramatically impact the molecule’s function. Steroids get their name from the fatty molecule cholesterol. In fact, our bodies make steroids out of cholesterol. That fatty cholesterol base means that steroids are able to cross fatty cell membranes and enter cells. Within the cell, they can directly influence gene expression and protein synthesis. This is different from many other types of signaling molecules, which can’t cross the cell membrane and have to create their effects from outside the cell, through more complicated pathways. So steroids can create their effects faster than those other molecules. Back to the steroids in anti-inflammatory medications: all of these are based on a naturally occurring steroid called cortisol. Cortisol is the body’s primary stress signal, and it has a huge range of functions. When we experience a stressor— anything from a fight with a friend, to spotting a bear, to an infection or low blood sugar— the brain reacts by sending a signal from the hypothalamus to the pituitary gland. The pituitary gland then sends a signal to the adrenal glands. 
The adrenal glands produce cortisol, and release some constantly. But when they receive the signal from the pituitary gland, they release a burst of cortisol, which spurs the body to generate more glucose for energy, decrease functions not immediately related to survival, like digestion, and can activate a fight-flight-or-freeze response. This is helpful in the short term, but can cause undesirable side effects like insomnia and lowered mood if they last too long. Cortisol also interacts with the immune system in complex ways— depending on the situation, it can increase or decrease certain immune functions. In the process of fighting infection, the immune system often creates inflammation. Cortisol suppresses the immune system’s ability to produce inflammation, which, again, can be useful in the short term. But too much cortisol can have negative impacts, like reducing the immune system’s ability to regenerate bone marrow and lymph nodes. To prevent levels from staying high for too long, cortisol suppresses the signal that causes the adrenal glands to release more cortisol. Medicinal corticosteroids channel cortisol’s effects on the immune system to fight allergic reactions, rashes, and asthma. All these things are forms of inflammation. There are many synthetic steroids that share the same basic mechanism: they enhance the body’s cortisol supply, which in turn shuts down the hyperactive immune responses that cause inflammation. These corticosteroids sneak into cells and can turn off the “fire alarm” by suppressing gene expression of inflammatory signals. The steroids in inhalers and creams impact only the affected organ— the skin, or the lungs. Intravenous or oral versions, used to treat chronic autoimmune conditions like lupus or inflammatory bowel disease, impact the whole body. With these conditions, the body’s immune system attacks its own cells, a process analogous to a constant asthma attack or rash. 
A constant low dose of steroids can help keep this renegade immune response under control– but because of the negative psychological and physiological effects of long-term exposure, higher doses are reserved for emergencies and flare-ups. While an asthma attack, poison ivy welts, and irritable bowel syndrome might seem totally unrelated, they all have something in common: an immune response that’s doing more harm than good. And while corticosteroids won’t give you giant muscles, they can be the body’s best defense against itself.

P49 Can wildlife adapt to climate change?
Rising temperatures and seas, massive droughts, changing landscapes. Successfully adapting to climate change is growing increasingly important. For humans, this means using our technological advancement to find solutions, like smarter cities and better water management. But for some plants and animals, adapting to these global changes involves the most ancient solution of all: evolution. Evolutionary adaptation usually occurs over time scales of thousands to hundreds of thousands of years. But in cases where species are under especially strong selective pressure, like that caused by rapidly changing climates, adaptive evolution can happen more quickly. In recent decades, we've seen many plants, animals, and insects relocating themselves and undergoing changes to their body sizes and the dates they flower or breed. But many of these are plastic, or non-heritable, changes to an individual's physical traits. And there are limits to how much an organism can change its own physiology to meet environmental requirements. That's why scientists are seeking examples of evolutionary changes coded in species' DNA that are heritable, long-lasting, and may provide a key to their future. Take the tawny owl. If you were walking through a wintry forest in northern Europe 30 years ago, chances are you'd have heard, rather than seen, this elusive bird. Against the snowy backdrop, its plumage would have been near impossible to spot. Today, the landscape is vastly different. Since the 1980s, climate change has led to significantly less snowfall, but you'd still struggle to spot a tawny owl because nowadays, they're brown. The brown color variant is the genetically dominant form of plumage in this species, but historically, the recessive pale gray variant triumphed because of its selective advantage in helping these predators blend in. 
However, less snow cover reduces opportunities for camouflage, so lately, this gray color variant has been losing the battle against natural selection. The offspring of the brown color morphs, on the other hand, have an advantage in exposed forests, so brown tawny owls are flourishing today. Several other species have undergone similar climate-change-adaptive genetic changes in recent decades. Pitcher plant mosquitoes have rapidly evolved to take advantage of the warmer temperatures, entering dormancy later and later in the year. Two-spot ladybug populations, once composed of equal numbers of melanic and non-melanic morphs, have now shifted almost entirely to the non-melanic color combination. Scientists think that keeps them from overheating. Meanwhile, pink salmon have adapted to warmer waters by spawning earlier in the season to protect their sensitive eggs. And wild thyme plants in Europe are producing more repellent oils to protect themselves against the herbivores that become more common when it's warm. These plants and animals belong to a group of about 20 identified species with evolutionary adaptations to rapid climate change, including snapping turtles, wood frogs, knotweed, and silver-spotted skipper butterflies. However, among the estimated 8.7 million species on the planet, scientists hope to discover many more evolving in response to climate change. For most of our planet's astounding and precious biodiversity, evolution won't be the answer. Instead, many of those species will have to rely on us to help them survive a changing world or face extinction. The good news is we already have the tools. Across the planet, we're making on-the-ground decisions that will help entire ecosystems adapt. Critical climate refuges are being identified and set aside, and projects are underway to help mobile species move to more suitable climates. Existing parks and protected areas are also doing climate change check-ups to help their wildlife cope. 
Fortunately, it's still within our power to preserve much of the wondrous biodiversity of this planet, which, after all, sustains us in so many ways.

P50 Can you find the next number in this sequence?
These are the first five elements of a number sequence. Can you figure out what comes next? Pause here if you want to figure it out for yourself. Answer in: 3 Answer in: 2 Answer in: 1 There is a pattern here, but it may not be the kind of pattern you think it is. Look at the sequence again and try reading it aloud. Now, look at the next number in the sequence. 3, 1, 2, 2, 1, 1. Pause again if you'd like to think about it some more. Answer in: 3 Answer in: 2 Answer in: 1 This is what's known as a look-and-say sequence. Unlike many number sequences, this relies not on some mathematical property of the numbers themselves, but on their notation. Start with the left-most digit of the initial number. Now, read out how many times it repeats in succession followed by the name of the digit itself. Then move on to the next distinct digit and repeat until you reach the end. So the number 1 is read as "one one," written down the same way we write eleven. Of course, as part of this sequence, it's not actually the number eleven, but 2 ones, which we then write as 2 1. That number is then read out as 1 2 1 1, which written out we'd read as one one, one two, two ones, and so on. These kinds of sequences were first analyzed by mathematician John Conway, who noted they have some interesting properties. For instance, starting with the number 22 yields an infinite loop of two twos. But when seeded with any other number, the sequence grows in some very specific ways. Notice that although the number of digits keeps increasing, the increase doesn't seem to be either linear or random. In fact, if you extend the sequence infinitely, a pattern emerges: the ratio between the number of digits in two consecutive terms gradually converges to a single number known as Conway's Constant. This is equal to a little over 1.3, meaning that the number of digits increases by about 30% with every step in the sequence. What about the numbers themselves? That gets even more interesting. 
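The reading rule described above is simple enough to automate. Here's a minimal Python sketch (the function names are my own, not from the video) that generates the sequence from any seed:

```python
from itertools import groupby

def look_and_say(term: str) -> str:
    # Read each run of identical digits aloud: e.g. "111" -> "31" (three ones).
    return "".join(str(len(list(run))) + digit for digit, run in groupby(term))

def sequence(seed: str, length: int) -> list[str]:
    # Repeatedly apply the reading rule to build the first `length` terms.
    terms = [seed]
    for _ in range(length - 1):
        terms.append(look_and_say(terms[-1]))
    return terms

print(sequence("1", 6))    # ['1', '11', '21', '1211', '111221', '312211']
print(look_and_say("22"))  # '22' -- the seed 22 reads as itself, looping forever
```

Comparing the lengths of consecutive terms of, say, `sequence("1", 40)` shows the ratio slowly approaching Conway's Constant, roughly 1.3036.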
Except for the repeating sequence of 22, every possible sequence eventually breaks down into distinct strings of digits. No matter what order these strings show up in, each appears unbroken in its entirety every time it occurs. Conway identified 92 of these elements, all composed only of digits 1, 2, and 3, as well as two additional elements whose variations can end with any digit of 4 or greater. No matter what number the sequence is seeded with, eventually, it'll just consist of these combinations, with digits 4 or higher only appearing at the end of the two extra elements, if at all. Beyond being a neat puzzle, the look-and-say sequence has some practical applications. For example, run-length encoding, a data compression technique once used for television signals and digital graphics, is based on a similar concept. The number of times a data value repeats within the code is recorded as a data value itself. Sequences like this are a good example of how numbers and other symbols can convey meaning on multiple levels.
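Run-length encoding applies the same count-and-value idea directly to data. A minimal sketch in Python (the names and the sample "scanline" are illustrative, not taken from any particular codec):

```python
from itertools import groupby

def rle_encode(data: str) -> list[tuple[int, str]]:
    # Record each run as a (count, value) pair, just like reading a look-and-say term.
    return [(len(list(run)), value) for value, run in groupby(data)]

def rle_decode(pairs: list[tuple[int, str]]) -> str:
    # Expand each (count, value) pair back into its run of repeated values.
    return "".join(value * count for count, value in pairs)

scanline = "WWWWWBBBWWWW"  # e.g. pixels in one row of a black-and-white image
encoded = rle_encode(scanline)
print(encoded)                          # [(5, 'W'), (3, 'B'), (4, 'W')]
assert rle_decode(encoded) == scanline  # the round trip is lossless
```

The scheme pays off exactly when the data has long runs of repeated values, which is why it suited early television signals and simple graphics.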