For most of the year, Iceberg Alley is gray and cold. The largest city on its shores, St. John’s, is known as “Canada’s Weather Champion.” Among major Canadian cities, the capital of Newfoundland and Labrador is the snowiest, windiest, wettest, and cloudiest, enjoying fewer than 1,500 hours of sunshine each year. Seattle, for comparison, gets 2,200 hours of sun annually. St. John’s is so overcast that the difference between it and Seattle is greater than that between Seattle and Tampa. But for a few months, from May to August, the sun breaks through the clouds and warms the freezing waves swirling off the coast.
In this brief window, when Newfoundland relishes nearly half of the sunshine it will absorb for the entire year, icebergs fill the Labrador Sea. The Arctic ice pack undergoes its seasonal melt and Baffin Bay thaws, allowing the frozen mountains to continue their journey toward the Atlantic. Most break off glaciers on the west coast of Greenland—what glaciologists call “calving.” Speakers of a variety of languages, from Afrikaans to Uzbek, use the same word to describe the process, as if the icy masses are the living offspring of glaciers. In Albanian, Farsi, and Italian, it is even more explicit: Glaciers “give birth.” Across cultures and languages, icebergs are conceptualized like wild cattle or horses roaming the maritime frontier in our rhetorical imagination. It is no wonder, then, that the International Ice Patrol and Canadian Ice Service describe the summertime influx of icebergs as an annual “migration.”
This is when iceberg cowboys head to sea. These rough-and-tumble mariners earn their living wrangling icebergs—sometimes to subdue and capture the leviathans, other times to herd the ice in new directions. They are undaunted by warnings issued by the International Ice Patrol. In that sense, the brave seafarers make bad role models for academics interested in icebergs and ship captains navigating the North Atlantic. The iceberg cowboys, however, give us a hint of what it will take to harvest icebergs if we are ever going to use them to save the planet. They show us that these frozen beasts are not entirely unapproachable.
One million years ago, when mastodons roamed the Atlantic Coastal Plain in what is today New London, Connecticut, it was snowing in Greenland. We know this because glaciologists have taken ice cores from the center of the country. Using a drill that cuts a circle around a central point, they extract long, skinny tubes of ice. The process can take years, especially in a place like Greenland, where the ice sheet is more than 2 miles thick and drilling can only be done during the summer. Back in the lab, the glaciologists then analyze the ice to identify the particulates and chemicals captured by the falling snow. Using this information, they are able to reconstruct past climates and determine local temperature, greenhouse gas concentrations, and volcanic and solar activity.
The Greenland ice sheet was likely first formed some 3 million years ago, but it has grown and shrunk as the planet has warmed and cooled. It currently stretches 1,500 miles long by 680 miles wide and covers 80 percent of Greenland. Because most of the ice sheet is ringed by mountains, glaciers that lie along the coast like Sermeq Kujalleq function as outlets through which ice and water from the ice sheet come gushing out. Like rivers, glaciers are constantly flowing, pushed forward by the weight of their own ice. Sermeq Kujalleq, also known as Jakobshavn Glacier, travels an average of 130 feet in 24 hours and calves more than 11 cubic miles of icebergs each year into the Ilulissat Icefjord. Owing to this constant growth and shrinking, the oldest remaining known ice in the Greenlandic ice sheet today dates from the Pleistocene epoch, 1 million years ago.
Glaciers form when snow builds up and is buried each year by more and more snow. To understand the process, it helps to think of the snowball fights you might have had as a child. At normal atmospheric pressure, ice melts at 32 degrees Fahrenheit. Adding pressure, however, can lower the temperature at which this transformation, or solid-to-liquid phase change as we’re taught in chemistry class, occurs. Imagine squeezing a fistful of powdery snow. The force of your clasp manages to melt the snow just a bit. When you open your fingers and release the pressure, the snow refreezes into a harder, more solid lump. During a snowball fight, you might repeat this process a few times, adding a bit more fresh snow to the ball in your hand each time, until you create the perfect ammunition. In Greenland during the Pleistocene epoch, as more and more snow piled up, the ice crystals were compressed and recrystallized over centuries, forming rock-hard, crystal-clear glaciers.
To get to this ancient ice, we don’t always have to rely on glaciologists and their specialized drills. Instead, we can wait for icebergs to calve from glaciers and come to us; after all, they can travel thousands of miles before melting. For this reason, the famed Scottish geologist Charles Lyell believed icebergs transported boulders around the world. His dear friend Charles Darwin agreed, further suggesting that icebergs were responsible for the dispersion of species. While this theory has since been debunked, the 19th-century naturalists were correct that icebergs can act as transport vessels, bringing artifacts and stories from the past with them. In addition to chemicals and air bubbles trapped in snow, plants and animals can get caught on top of glaciers and become part of the historical record. Sometimes the findings are unbelievable. In the 19th century, several people reported seeing woolly mammoths frozen inside icebergs. Their shaggy hair and long, curled tusks were purportedly cryogenically preserved inside the ice. Another legend tells of a fully clad Viking encased in an iceberg, still gripping his spear and shield. Every time an iceberg floats by, an ancient piece of the past is carried along with it.
Iceberg ice is not just primeval; it is also pristine. As snowflakes cascade through the atmosphere, they can catch sundry gaseous and particulate matter thanks to their latticework structures. Snow falling today, for instance, might absorb black carbon, mercury, formaldehyde, pesticides, and vehicle exhaust. Some scientists consequently advise children not to eat snow in urban areas. Thousands of years ago, however, the same pollutants were not floating around our atmosphere, so ancient glacial ice is comparatively immaculate. It has far fewer parts per million of impurities than most tap waters contain today. When glaciers calve, this unpolluted ice is packaged in a tidy parcel and launched into the salty sea.
Once an Arctic iceberg calves, it will typically live for three to six years, depending on its size and the journey it takes. An iceberg headed to the equator will melt faster than one grounded near the poles. Cycles of thawing and refreezing further create crevasses within the iceberg that can cause a berg to crumble or explode and spawn further icebergs. The final stage of an iceberg’s life is especially volatile and difficult to predict. Scientists struggle to precisely foresee the path an iceberg might follow since the shape and size of an iceberg, ocean currents, winds, and waves affect a berg’s velocity. In Iceberg Alley, the average drift speed is about 0.5 mph, though some bergs zip along at more than 2 mph. At this point in their lives, Arctic icebergs are prone to capsizing since increased erosion over time results in a loss of stability. In an iceberg’s death throes, as some glaciologists describe it, the blocks are also noisy. Icebergs groan and sputter as they break up over open water. Glaciologists talk about “singing” icebergs, too. When bergs scrape the seabed or rub against each other they can vibrate and emit chilling harmonic tremors, like running a finger around the rim of a wine glass. Perhaps it is to be expected that we talk about icebergs like they are alive. Iceberg cowboys try to capture these creatures as they migrate through Iceberg Alley, before they die and all that unspoiled freshwater mixes with the ocean.
Ed Kean is a fifth-generation fisherman known abroad as “Captain Ahab of the Ice.” To my ears, his speech sounds like a mix of a cheery Irishman and a grunting whaler too busy or too weary to open his mouth any wider. Despite Kean’s easygoing attitude, he is acutely aware of the danger posed by icebergs that float in the waters off the coast of Newfoundland. Many calved from the very same glacier, Sermeq Kujalleq, that produced the frozen torpedo that sank the Titanic just 320 nautical miles away. But the risk is worth it to continue living off the sea. As the iceberg cowboy tells it, his family used to harvest ice from icebergs to cool fish that they brought up to the plants in Labrador. In the 1970s they had the ice tested at Memorial University of Newfoundland to ensure it was safe. In the process, they learned that it was exceptionally pure. In the 1990s Kean began harvesting ice commercially, and he has been at it ever since.
Icebergs “taste like water should taste,” he explains. That’s why they are worth wrangling. According to Kean, he and his wife drink several liters of iceberg water a day and his mother “will only drink iceberg water.” In St. John’s, when I tell people I’m interested in iceberg harvesting, they almost instantly reference Ed Kean, who is something of a celebrity here. The global COVID-19 pandemic spoiled my plan to sail with Kean, so I chat with him about icebergs while we’re both standing on solid land. Proudly, the fisherman also directs me to videos online that document his work. By all accounts, it is arduous labor. Some years, Kean sells more than 1 million liters of the specialty water.
On days he harvests icebergs, which is almost every day in the short migration season, Kean is awake by dawn with a small crew. Using a satellite map, Kean identifies icebergs and motors his large fishing boat toward them. Once he sees the ice, he can select his approach. If the berg is grounded and sufficiently stable, he can carefully maneuver his accompanying barge, outfitted with a crane, near the ice. Then, using a hydraulic clamp attachment, he and his crew scoop up the ice and feed it into a large grinder on the ship. The ice is shredded and piped into storage tanks.
More commonly, the iceberg cowboy cannot get so close to the ice and harvesting looks a lot more like battling a wild animal. Because the ice can flip and roll unpredictably, Kean must approach cautiously. Plus, icebergs have treacherous underwater projections, or “legs,” as Kean calls them, that could damage his ship. Occasionally, his first move is to whip out a rifle. He shoots icebergs in hopes of splintering off a chunk. “Sometimes it works, sometimes it doesn’t.” Kean tells me that he is moving away from firearms now, though. The crew next leaves the big rusty rig behind and hops into a small motorboat to come near the ice. They circle the iceberg looking for smaller pieces that have broken off. Even these chunks can weigh more than a ton.
The next part comes right out of a rodeo. The cowboys use long wooden poles tipped with metal hooks to prod and jostle the ice closer to the boat. Once the iceberg is well positioned, the men fling a net over the ice and wrap it tight. The brute is then dragged back to the vessel, where a crane is lowered to affix a hook to the net. The ice is winched aboard, dropped onto the deck, and rinsed, and now the backbreaking work begins. Using axes, the crew hacks into the ice, chipping it into smaller chunks. These are then shoveled into 1,000-liter plastic storage containers where they will melt. Kean says he has difficulty maintaining a crew for many seasons since the work is so difficult. Maybe I should be grateful that I don’t have to participate in this rodeo.
Since I don’t get the opportunity to play cowboy in St. John’s, I decide to be a tourist instead. I board a ship from St. John’s Port Authority to see an iceberg. If harvesting icebergs is like herding cattle, sightseeing for icebergs is itself a bit like being on safari. My tour operator even describes “hunting” for icebergs on the open sea. Our target is dangerous and unpredictable, capable of lashing out at any moment. For that reason, the provincial government of Newfoundland and Labrador has published viewing guidelines. We should keep a distance equal to the length of an iceberg or twice its height, whichever is greater. Anything within this perimeter is considered the “danger zone,” in which viewers are exposed to falling ice, large waves, and submerged hazards.
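The viewing guideline reduces to a simple maximum of two measurements. As a minimal illustrative sketch (the function name and sample dimensions are my own, not an official tool), the danger-zone radius works out like this:

```python
def danger_zone_radius(length_m: float, height_m: float) -> float:
    """Minimum standoff distance from an iceberg, per the provincial
    viewing guidelines described above: the berg's length or twice
    its height, whichever is greater."""
    return max(length_m, 2.0 * height_m)

# Hypothetical berg 60 m long and 40 m tall: twice the height governs.
print(danger_zone_radius(60, 40))  # 80.0
```

For a long, low tabular berg the length term dominates instead, which is why the rule takes the greater of the two.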
As with lions and leopards, it can also be hard to tell where or when our mark might appear. “Who knows what nature is going to give us,” my guide explains. Luckily, I get to see icebergs. The first we encounter rises out of the ocean like a smooth boulder. A cotillion of terns is settled at the peak of the dome’s gentle curves, and for a moment, the ice seems passive and gentle. Then a wave crashes into the mass and I am reminded that the frozen surface is not a static island, but an enormous, peripatetic rock-hard reef. As the boat inches closer, I can see the slippery surface continue below the waves, stretching into the dark ocean. I am transported by the alien sight up close. The floating orb is like a puffy white flying saucer cutting through the water.
The next iceberg we spy is more spectacular, sparkling like a crystal castle in the bright sun. Two pinnacles, like turret-adorned towers, loom 50 feet over our ship. They are connected with a low ice wall that reminds me of a crenelated parapet. I ask one of the guides if she thinks we can sail nearer. “That’s up to the captain,” she answers, “but he probably won’t get closer because this berg isn’t grounded.” Salt water lashes the ice from every angle, like the ocean is a great artist carving a magnificent sculpture from a giant block. The ice dwarfs our boat even from our safe distance. When I finally look down, I am mesmerized by the aquamarine hue radiating from the submarine foundation. Intellectually, I know that around 90 percent of an iceberg’s mass is hidden. But there is so much ice bobbing on the waves, I cannot fathom the true size of this creature.
Technically, I’ve seen a domed and a pinnacled iceberg. Glaciologists distinguish between six different types. The categories are helpful because differently shaped bergs have greater or lesser height-to-drift ratios and drift speed as a percentage of wind speed. Some varieties are also more attractive for harvesting, depending on whom you ask. Domed icebergs are smooth and rounded on top. A blocky iceberg has a mostly flat top and steep vertical sides. Like the name implies, wedge icebergs have one thick end that tapers to a thin edge that disappears beneath the water. Pinnacled icebergs can be shaped like pyramids or consist of several spires that soar above the bulk of the mass. Similarly, dry-dock icebergs have two points like a giant letter U, the base of which is at water level so a boat could float on top of the berg between the towers. Lastly, a tabular iceberg has a flat surface because it usually calves from an ice sheet or shelf glacier. Most come from Antarctica, though some originate from Greenland and the ice caps of Arctic islands. These bergs can be many kilometers in length and width—far greater than the powerful ice I witness in Newfoundland.
Back on land, I head to raucous George Street to be “screeched-in.” The cheeky local tradition begins with a supplication. I introduce myself and say I’d like to be a Newfoundlander. Next, I am meant to recite a vow. It is impenetrable to my daft ears, so I mumble along and jump to the next step, taking a shot of rum. It goes down easier than the final act: kissing a cod. I am again grateful the pandemic has changed my plans, since I get to peck the fish through a mask. I smooch the frozen cod and quickly order an Iceberg beer. Brewed up the street at Quidi Vidi Brewery, the smooth lager advertises itself as being “made with pure 20,000-year-old iceberg water.” The beer is refreshing, but I cannot detect anything special about it. Admittedly, I am no connoisseur, and I’ve just had a fish in my face.
While I drink the beer, I think of Ed Kean. His sobriquet, Captain Ahab of the Ice, is unfitting. At the end of Moby-Dick, Herman Melville’s fictional captain is drowned in his monomaniacal attempt to subdue the white whale. In St. John’s, the lager in my hand is a sign of Kean’s success. He has vanquished icebergs and lived to tell about it.
Matthew Birkhold is an associate professor in the department of Germanic languages and literatures at The Ohio State University, and the author of Chasing Icebergs: How Frozen Freshwater Can Save the Planet. His writing has appeared in The New York Times, The Atlantic, Foreign Affairs, The Washington Post, The Paris Review, and Indian Country Today.
Excerpted from Chasing Icebergs: How Frozen Freshwater Can Save the Planet by Matthew Birkhold. Published by Pegasus Books, 2023.
George Church looks like he needs a nap. I’m talking to him on Zoom, and his eyelids have grown heavy, inclining toward slumber. Or maybe my mind is playing tricks on me. He assures me he is wide awake. But sleeping and waking life are often blurred for Church. One of the world’s most imaginative scientists, Church is a narcoleptic.
A rare disorder, narcolepsy causes sudden attacks of sleep, and Church has fallen asleep in some unfortunate circumstances—at the World Economic Forum, just a few feet away from Microsoft founder Bill Gates, for instance. He also had to give up driving due to the risk that a bout of sleepiness will strike while he is behind the wheel. But Church, a Harvard geneticist known for his pathbreaking contributions to numerous fields—from genetics to astrobiology to biomedicine—says the benefits of his condition outweigh the inconveniences. Many of his wildest and most prescient ideas come from his narcoleptic naps.
“The fact is, I fall asleep several times a day, and so almost everything comes from there,” Church says. His idea for a quick and simple way to “read” DNA—which resulted in the first commercial genome sequence, of the human pathogen H. pylori—came from a narcoleptic nap. He also conceived of editing genomes with a method analogous to CRISPR, and building new genomes with off-the-shelf molecules, during narcoleptic naps. More recently, in December, a wild idea for a space probe that could reach distant stars within just 20 years, at one-fifth the speed of light, came to him after a narcoleptic nap. He proposed that these lightning-speed interstellar missions could be launched by microbes and powered by laser sails. The ideas that come to him are often the result of collisions of unexpected images in his head. “I try to turn science fiction into science fact,” Church tells me.
The relationship between sleep, dreaming, and creativity has been the subject of conjecture for hundreds of years. Reports of creative inspiration and discoveries made by artists, inventors, and scientists while dreaming suggest these states of mind are intimately bound together. The symbolist poet Saint-Pol-Roux was known to guard his sleep at night with a sign on his bedroom door that read “Do not disturb: Poet at work.” Russian scientist Dmitri Mendeleev reportedly had a vision of the periodic table in a dream after three days of exhaustive effort (though it may have just been the perfection of an idea he had while awake). Stephen King claims he dreamt up his novel Misery during a somnolent transatlantic flight.
Rather than leave such inspirations to chance, American inventor Thomas Edison designed a strategy for mining his dreams for material.1 He would doze off with a steel ball in each hand. Once his body went limp with sleep, the balls would drop to the floor with a clatter and wake him up. He could then recall details of his dreams and record any insights. This approach was later adopted by many other creative giants, including inventor Nikola Tesla, surrealist painter Salvador Dalí, and Romantic writer and poet Edgar Allan Poe.
Scientific studies seem to validate these tales. Study participants asked to “incubate” a problem in their dreams often come up with a useful solution, and both the frequency and complexity of one’s dream recall have been correlated with higher scores on creativity evaluations.2 The stage of sleep most closely associated with creative inspiration is known as REM, short for rapid eye movement. REM sleep begins about 70 minutes after a person loses consciousness and is rich with dream life. Lucid dreams, in which the dreamer knows he or she is dreaming and can sometimes direct the dream, are thought to primarily occur in REM.3 Waking from REM sleep has been shown to improve study subjects’ ability to solve anagrams—a word, phrase or name formed by rearranging the letters of another—and to puzzle out problems that require making associations between loosely related ideas.2
But researchers have recently identified another state of mind that lies in the transition between waking and sleeping and may be even more fertile for creative inspiration than REM. It is called N1 or sleep onset, and it is the first of three stages in pre-REM sleep. People with narcolepsy frequently fall into and out of N1 during daytime naps, giving them much greater access to these borderland perceptual states than normal sleepers.1
N1 is a hybrid, or “semilucid” state of mind, says French neuroscientist Celia Lacaux, when individuals are just beginning to detach from the waking environment. It is a mental twilight that allows one to “freely watch the mind wander while maintaining a logical ability to identify creative sparks,” says Lacaux. This shadowy frontier between waking and dreaming, to which all sleepers have access, may be the source of many of humanity’s most novel ideas, inventions, and works of art. Psychologists call it “hypnagogia,” after the Greek words for “sleep” (hypnos) and “to lead” (agogo). The French sometimes refer to it as “entre chien et loup,” literally “between dog and wolf.”
Like REM sleep, N1 often features involuntary dream-like perceptual phenomena. These are known as hypnagogic hallucinations, and they combine details from recent waking experiences with loosely associated memories in novel or unusual ways. The difference is that in N1, the dreamer is closer to the surface of sleep, to conscious control and to the external perceptual environment.4
“Hypnagogia happens to be a time period where you are much more subject to outside influence and where you’re doing much more auditory processing and where your dream recall rates are much higher,” says Adam Haar, a dream researcher at MIT Media Lab. It is characterized by phenomenological unpredictability, distorted perception of space and time, and spontaneous, fluid idea association.
A relationship between hypnagogia and creativity makes intuitive sense. One major theory of creativity posits that it results when our minds make connections between distantly related concepts stored in our memories.5 This is a process that is thought to occur naturally during sleeping and dreaming: New memories mingle in novel and abstract ways with older ones as a means of consolidating them, laying down tracks in our brains for later recollection. Neuroscientist Karl Friston, who studies consciousness, proposes that this mashing together of old and new is a process that helps to minimize redundancy and complexity in our memory system, and prepares us to navigate a fuller range of possible scenarios in our waking lives.2
But wild associations between remote ideas and memories are, by themselves, not sufficient for creativity to flourish. Truly creative ideas are not just novel, but useful,6 so creative cognition must also include processes of evaluation and discrimination.7 Of course, the sharper and more discriminating the mind, the more interesting and diverse the bank of ideas that mind will have to pull from, and the more brilliant the insights. (“Chance favors only the prepared mind,” as Louis Pasteur, one of the founders of microbiology, is reported to have said.) Evaluation and discrimination are executive processes that require some conscious control and typically occur when one is awake.8
A few years ago, Lacaux, who works at a treatment center for narcoleptic patients in Paris, decided to test a hunch that narcoleptics are more creative than the general population, given their unique access to hybrid states between sleeping and waking. She recruited 185 narcolepsy patients and 126 normal sleeper controls from France and Italy and drilled them on measures of creativity and creative achievement. Study participants were asked, for example, to come up with as many ideas as possible related to a verbal or visual prompt, including inventing endings to a story and generating drawings incorporating a particular shape. They were also asked to weave multiple pre-selected elements, both abstract and concrete, into a unique original story and drawing.
Lacaux and her colleagues found that, in general, patients with narcolepsy scored higher on all measures of creativity in standard evaluations, though only a few of them put this creative potential to use in career or achievement-oriented ways. The narcoleptic subjects outperformed controls in every category: visual, verbal, abstract and concrete, convergent and divergent modes of thinking. The more symptoms of narcolepsy subjects had—all of them hybrid states between wakefulness and sleep, such as sleep paralysis, waking hallucinations, or dream enactment—the greater their creativity.2
In a subsequent study, Lacaux set out to examine whether short bouts of N1 sleep, so common among narcoleptics, were uniquely associated with increased creativity. She presented 103 normal sleepers with a series of mathematical problems that could be almost instantly solved with a hidden rule. Then, in an exercise inspired by Edison, the study subjects were asked to take a 20-minute break, relaxing with their eyes closed, an object in their right hands, while their brains were monitored with electrodes. Lacaux and her colleagues found that as little as 15 seconds in N1 sleep tripled the chance that participants would have a moment of creative insight after the break and discover the rule, compared to participants who remained awake. If participants fell past N1 sleep into N2, a deeper stage of pre-REM sleep, this creative sweet spot was lost.1
“It was a bit of a confirmation of Thomas Edison’s story,” says Lacaux, “because Thomas had a feeling that sleep onset was great, but that he needed to wake up from that moment, not go into deeper sleep, or he would lose the beneficial effect on creativity.”
Church testifies to the unique power of short bouts of sleep. For a period, he was recording the long dream narratives that came out of his nighttime sleep, but he found the practice mostly time-consuming, offering little inspiration for his work. He also went through a phase where he was able to control his dreams. His favorite dream activity was flying, but just for the thrill of it. It didn’t yield any creative insights. “The recording of the dreams, which I did for several months, and the guiding my dreams, which I did for a couple of years, never contributed anything useful,” he says.
During his daytime naps, Church often notices that his dream state is mingled with the external world, that the images he sees in his dreaming mind are blended with everyday life. He might, for instance, see the face of the person he is talking to superimposed on his dream and have difficulty knowing whether what he is seeing is reality or not. “The advantage of the dream state is that you think of juxtapositions that you wouldn’t think of if you’re just thinking logically,” he tells me.
Hypnagogic hallucinations have been shown to make people believe they are inherently creative, according to a 2020 study, and this self-belief is actually among the most important predictors of creative performance and achievement. “In narcoleptic patients there is an association between self-perception of creativity and hypnagogic phenomena,” the authors of the study wrote. The belief in one’s powers of creativity becomes self-fulfilling.9
Despite the real dangers of falling asleep during the day—while cooking, say, or walking down the street—Church has chosen not to take medications that could remedy his daytime sleepiness. He wants to continue to heed the ideas that come to him during naps. “Almost all of our projects have, at one point or another, been described as impossible or useless,” he says. “But they tend to work out.”
1. Lacaux, C., et al. Sleep onset is a creative sweet spot. Science Advances 7, eabj5866 (2021).
2. Lacaux, C., et al. Increased creative thinking in narcolepsy. Brain 142, 1988-1999 (2019).
3. Stumbrys, T. & Erlacher, D. Lucid dreaming during NREM sleep: Two case reports. International Journal of Dream Research 5, 151-155 (2012).
4. Horowitz, A., Esfahany, K., Gálvez, T.V., Maes, P., & Stickgold, R. Targeted dreaming increases waking creativity. Current Biology (2022).
5. Benedek, M. & Neubauer, A.C. Revisiting Mednick’s Model on creativity-related differences in associative hierarchies. Evidence for a common path to uncommon thought. Journal of Creative Behavior 47, 273-289 (2013).
6. Diedrich, J., Benedek, M., Jauk, E., & Neubauer, A.C. Are creative ideas novel and useful? Psychology of Aesthetics, Creativity and the Arts 9, 35-40 (2015).
7. Beaty, R.E., Silvia, P.J., Nusbaum, E.C., Jauk, E., & Benedek, M. The roles of associative and executive processes in creative cognition. Memory & Cognition 42, 1186-1197 (2014).
8. Fogel, S., et al. While you were sleeping: Evidence for high-level executive processing of an auditory narrative during sleep. Consciousness and Cognition 100, 103306 (2022).
9. D’Anselmo, A., et al. Creativity in narcolepsy type 1: The role of dissociated REM sleep manifestations. Nature and Science of Sleep 12, 1191-1200 (2020).
Silence. Eerie, unnerving silence.
Despite all our work, all our straining efforts to hear a whisper from the void, that’s all we have. Silence.
More than 60 years ago, the pioneering radio astronomer Frank Drake and his colleagues laid the groundwork for what astronomers around the world would transform into an ambitious idea: SETI, the Search for Extraterrestrial Intelligence. The rationale was simple. As humanity progressed in its technological development, we eventually struck upon the idea of radio communications. These waves of electricity and magnetism weren’t only capable of wrapping messages around our globe, they could also pierce into the depths of the galaxy itself. And if any other intelligent beings were somewhere out there, they would likely be creating their own radio chatter.
The deafening silence so far suggests that we are alone.
This quest for extraterrestrial intelligence continues today, largely ignored by professional astronomers and kept alive in large part by generous private donations. Despite the extraterrestrial cold shoulder, proponents of SETI (by which I mean the general field, not specifically the institute by that name) argue that our efforts thus far have barely scratched the surface, hemmed in by the use of radio dishes to scan a relatively minuscule band of radio wavelengths in tiny slivers of the sky, for brief periods of time, among just a small bubble of nearby stars.
All life, even humble, simple single-celled organisms, can be loud in its own way.
Perhaps it’s been quiet because intelligence is not bound to follow the same technological track as us—cultural development, just like evolution, has no prescribed course after all. Or perhaps other civilizations only briefly broadcast in radio signals before switching to other, more targeted and efficient methods of communication. Perhaps the gulfs of time and space separating intelligences have simply been too vast to cross yet.
Or perhaps we truly are alone.
But the likely answer is that most other life forms out there don’t meet the capital “I” of SETI’s target: “intelligence.” This approach of listening for alien peers, which dates back to the 19th century, was founded on the idea that intelligent life is loud and messy, broadcasting its existence—even unintentionally—for any careful listener to discover. And those noises, those disruptions in the expected patterns of the universe, could theoretically be easy for us to spot—even with relatively rudimentary 20th-century radio telescopes.
But all life, even humble, simple single-celled organisms, can be loud in its own way. And with new and near-term technology, we are now better poised than ever to detect even the simplest biology far, far afield. Not radio blasts, but subtle signatures. The traces not of equals, but of any life.
So the time has come to SETL: Search for Extraterrestrial Life.
Indeed, the SETL program (which by no means is an official or even community-recognized acronym, just a joke I never get tired of telling) has, in the past decade, become among the fastest growing areas in astronomy, combining the latest insights from astrophysics, chemistry, and biology to try to find any signs of life whatsoever in an alien world. (Indeed, even the SETI Institute itself has now included it in their portfolio of research.)
Without the loudness of alien technological signals, it seems like a hopeless pursuit to search for biological whispers among the approximately one trillion exoplanets in the Milky Way. The ultimate needle in the cosmic haystack.
But the SETL approach has two related advantages over the search for technologically advanced civilizations. First, the likelihood of success is much higher. It stands to reason that intelligent life would be much rarer than simpler life. Life has existed on our own planet for about as long as we’ve had a planet, and we only developed stone hand axes, let alone radio technology, basically yesterday. So there are probably many more worlds out there teeming with some form of life, making those planets an easier catch.
Second, one of the great hallmarks of life in any form is its ability to completely mess up a planet. Without life, worlds reach a certain equilibrium state governed by the simple physics of distance from a parent star, starting composition, and ordinary chemical and geologic processes.
The time has come to SETL: Search for Extraterrestrial Life.
But life as we know it (and, caveat again, we only have one example to work with here) just loves to throw everything out of balance. (No radio transmitters required.) The classic example on Earth is the presence of abundant oxygen in our atmosphere. Sure, oxygen is ridiculously common in the universe, and there’s plenty of it on Earth, bound to silicon in rocks or to carbon, for example. But loose oxygen is very unstable, and without life, there would be little to no oxygen in our planet’s atmosphere. There’s simply no physical or chemical or geological process that generates it in abundance and keeps replenishing it. But there is a biological process underway here: photosynthesis, which creates an atmosphere that is remarkably different from what it would be without life.
Of course this method isn’t foolproof. Methane is also a common byproduct of life on Earth, created by decomposing organic matter—and billions of cow farts. There’s certainly more methane on the Earth than there should be (which is somewhat of a problem for us because it’s a greenhouse gas). Our neighbor planet Mars also has a curious slight abundance of methane in its paltry excuse of an atmosphere. The methane on Mars even undergoes seasonal variations, just like on Earth.
Nobody thinks that Martian cow farts are responsible. But, even after more than a decade of relatively close-range study, there is no consensus as to what exotic geochemical process underlies the Martian methane mystery.
So the search for life abroad in the cosmos likely won’t hinge on a single, eureka-like moment of discovery, but rather the slow and deliberate accumulation of evidence.
And that evidence will come, likely not in the form of radio waves, but light.
When we spot an alien world, we now have several methods of sampling its atmosphere from afar. Sometimes the planet will cross between our view and its parent star. When it does, we can see how the star’s light changes subtly as it filters through that planet’s atmosphere, providing spectral features, the fingerprints of various elements and molecules.
If a lucky alignment for visible light isn’t in the cards, we can search in the dark. Using a coronagraph to block the light of the parent star, researchers can observe the infrared radiation from the body of the planet itself or from starlight reflected off it. Either way, we get another way to taste its atmosphere.
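The transit signal at the heart of the first technique comes down to simple geometry: the dip in starlight is set by the ratio of the planet’s area to the star’s, and an absorbing atmosphere makes the planet look slightly larger, deepening the dip, at the wavelengths it absorbs. A toy calculation (not any mission’s actual pipeline, and the 50-kilometer atmospheric height is a made-up illustration) shows just how small that signal is for an Earth-size world:

```python
# Toy transit-depth calculation. The fraction of starlight blocked during
# a transit is (R_planet / R_star)^2; an atmosphere that absorbs at some
# wavelength adds a little effective radius there, and that tiny extra
# dip is the spectral fingerprint astronomers look for.

def transit_depth(r_planet_km: float, r_star_km: float) -> float:
    """Fraction of the star's light blocked by the transiting planet."""
    return (r_planet_km / r_star_km) ** 2

R_SUN_KM = 695_700
R_EARTH_KM = 6_371

base = transit_depth(R_EARTH_KM, R_SUN_KM)            # opaque disk only
# Hypothetical: an absorbing atmosphere adds ~50 km of effective radius
# at one particular wavelength.
with_atmo = transit_depth(R_EARTH_KM + 50, R_SUN_KM)

print(f"transit depth: {base * 1e6:.1f} ppm")         # roughly 84 ppm
print(f"atmospheric signal: {(with_atmo - base) * 1e6:.1f} ppm")
```

An Earth analog blocks less than a hundredth of a percent of its star’s light, and the atmospheric fingerprint is an order of magnitude smaller still, which is why these measurements demand such exquisite instruments.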
Astronomers have already used this technique to find evidence of carbon dioxide and methane on other worlds. So far, however, there has been no smoke signal of life.
Nobody thinks that Martian cow farts are responsible for the methane on Mars.
While NASA’s ongoing Transiting Exoplanet Survey Satellite and shiny new James Webb Space Telescope can get us some early clues about where to focus our attention, we are going to need much bigger and more complex instruments to pick up a broader range of potential biosignatures.
To really capture life’s whiffs on another world, agencies around this world are developing and proposing slews of breathtakingly ambitious new equipment and missions.
NASA is embarking on the early phases of creating a new space-based telescope specifically to search for habitable worlds. That instrument, which has a target launch date in the 2040s, will share a design plan with the (hopefully then-outdated) James Webb. But it will also bring along a powerful coronagraph, which will allow the mega-observatory to examine especially promising exoplanets. Those readouts won’t be the bare sketches that we get today, but a much more complete census of the chemical mixture of those alien atmospheres.
The European Space Agency is in the midst of launching a trio of satellites to join the James Webb in its hunt for any life. Each of them will have slightly different but overlapping planet-hunting and planet-characterizing capabilities. The small CHEOPS mission, launched in 2019, is designed to accurately measure the diameter of exoplanets. Combined with knowledge of mass, we get the density, which immediately narrows down the range of possible compositions (and friendliness to life). PLATO, expected to launch in 2026, will survey up to 1 million stars, with optics and instruments especially designed for hunting Earth-like worlds in Earth-like orbits around sun-like stars. Lastly, ARIEL, likely launching in 2029, will be a faithful companion to the James Webb, with similar capabilities but able to dedicate more time to observing individual worlds.
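The density step in that chain of reasoning is simple enough to sketch. Here is a minimal illustration; the classification cutoffs are my own round numbers for the sake of example, not mission criteria:

```python
# Diameter plus mass gives bulk density, which immediately separates
# rocky worlds from gas-rich ones. Cutoff values below are illustrative
# round numbers only, not any mission's actual classification scheme.
import math

def bulk_density(mass_kg: float, radius_m: float) -> float:
    """Mean density in kg/m^3 from a planet's mass and radius."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return mass_kg / volume

def rough_class(density_kg_m3: float) -> str:
    if density_kg_m3 > 3000:      # illustrative threshold
        return "likely rocky"
    if density_kg_m3 > 1000:      # illustrative threshold
        return "water/ice-rich or mixed"
    return "gas-dominated"

earth = bulk_density(5.972e24, 6.371e6)
print(f"Earth: {earth:.0f} kg/m^3 -> {rough_class(earth)}")
```

Earth works out to roughly 5,500 kilograms per cubic meter, comfortably in the rocky regime; a puffy gas world of the same diameter would come in far lower.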
Other nations, including China, have proposals floating around as well. The Chinese Space Agency recently released plans for a space-based observing platform containing a whopping seven telescopes to launch as early as 2026. Named Earth 2.0, that array of telescopes gives the observatory a huge advantage in field of view, allowing it to see planets around many more stars—as many as 1.2 million suns. Promising candidate worlds could then be probed for signs of life with a more focused analysis.
And if we do find life out there, quietly circling around another sun?
It’s impossible to say what the implications of such a momentous discovery would be. Surely a few prizes, including the Nobel, would be handed out. The conclusive demonstration of life arising elsewhere would both settle and raise scores of questions, some scientific, others philosophical. But science will have the opportunity to rise to the challenge, with the development and deployment of new generations of telescopes and observatories to better understand our newfound neighbors, however humble they may be.
And for me, that would be more than good enough.
Paul M. Sutter is a research professor in astrophysics at the Institute for Advanced Computational Science at Stony Brook University and a guest researcher at the Flatiron Institute in New York City. He is the author of Your Place in the Universe: Understanding our Big, Messy Existence.
Lead image: Skorzewiak / Shutterstock
When Gertrude Stein famously quipped that “we are always the same age inside,” she certainly wasn’t referring to the conglomerate of cells, carefully organized into tissues, that form a human body. We all understand that despite our best efforts to preserve youth, our material bodies inevitably age and fail us. Yet trying to understand whether all our body’s cells age at the same rate is not a trivial question in biology. Some of our tissues are built to last and degenerate slowly, like the soft spongy tissues that form the brain, while others, like a red blood cell, have a much shorter lifespan. It’s long been a central tenet of biology that aging results from the random accumulation of damage to specific cells over time.
But in the past decade, Steve Horvath, while a professor in human genetics and biostatistics at the University of California, Los Angeles, honed the thesis that aging in every tissue can be predicted by a single mathematical formula. In a 2022 paper, Horvath, together with an international group of scientists, identified this formula in 185 mammalian species.1 “I think it’s unbelievable that this is even possible,” Horvath told me. “But one formula can measure age in all species and all tissues.”
The formula implies that there is a universal clock to aging. As the years tick by, the risk of mortality increases. Tissues degenerate by accumulating wear and tear in the form of molecular damage, while the aging program depletes the body’s reserves of young stem cells to replace them. Some see this as evolution, a cruel mistress, having programmed us to die. But to Horvath, aging is unintentional. “I don’t want to say evolution selected a program that makes us die. I think it’s just that Mother Nature never selected against it,” he says.
If aging is a coding error in the system, can it be fixed?
If you have empathy for Dorian Gray, Oscar Wilde’s infamous protagonist who bartered his soul to not “grow old, and horrible, and dreadful,” you will agree this is a highly lamentable oversight. “Essentially, the program serves the purpose of development, perhaps protection against malignant transformation, and then later in life it does something bad,” Horvath says. Borrowing a phrase from João Pedro de Magalhães, an aging researcher at the University of Birmingham, he adds, “Aging is a coding error in the system.” Horvath sees this error as an emergent property, the result of cumulative subtle changes in cell identity and tissue composition, which together gradually compromise fitness and lead to a decline of organ function and the manifestation of physical aging.
Horvath is now synonymous with the term “biological age,” a tantalizing metric that refers to the youthfulness of individuals rather than their “chronological age,” as measured from the time of their birth. Markers of biological age can be interpreted as predictors of mortality. The hunt for accurate markers is a longstanding pursuit in the medical field and has included quantitative measurements, such as blood serum analyses of specific proteins or metabolites, as well as composite indices based on the more qualitative measurements of frailty and cognitive function. Most recently, scientists have looked to changes in DNA as the harbingers of aging. Reliable markers of biological age can reveal more about how those of us with an “accelerated” biological clock are aging too quickly. It’s not surprising that known factors that speed the process include lifestyle habits such as smoking and overeating.
Horvath’s “clock” has allowed scientists to “truly quantify aging,” says Vadim Gladyshev, a professor of medicine at Brigham and Women’s Hospital, Harvard Medical School, whose own research focuses on understanding the causative biological processes behind aging. “It was revolutionary. If you think about what was available to us 10 years ago, scientists were not able to quantify aging on the molecular level. They would quantify protein accumulation or telomere length or some kind of other functional feature, but these are individual measurements that are not really accurate. It was clear that the quantification was just not good enough, so you couldn’t draw many conclusions. When Steve developed the first clock, that’s when I knew this was the future. We followed in his steps.”
A mathematician by training, Horvath published his first clock theory in 2011, using an analysis of saliva samples collected from a study of twins, intended to shed light on the origins of homosexuality. Horvath was part of the study along with his brother, with whom he shares an identical genome but different sexual orientation. The study collected information on the entire genome sequences of the participants, the expression patterns of all genes, and whether the DNA material had accumulated physical “epigenetic” changes, in other words, a sea of information. “We were a data point,” Horvath told me.
Without a penny of funding to invest in studying the possible biological predispositions for homosexuality, Horvath spent countless hours on weekends and in his free time analyzing the wealth of data from the study, motivated by a personal curiosity. But instead of finding a plausible causal link for sexual orientation, his statistical analyses found a startling correlation between certain physical modifications to the saliva’s DNA and the age of the study’s participants. “Honestly, the signal was so strong, you could use any statistical model and you would see it,” he told me. Nevertheless, the result was unexpected because no one was looking at the relationship between epigenetics and age, and his findings kicked off further investigations.
Evolution is a cruel mistress, having programmed us to die.
Epigenetics is the study of physical modifications to our genes that either prevent or enhance their expression. What Horvath was able to observe by zeroing in mathematically on 80 or so genes across the whole human genome, a mere sprinkling of the 20,000, was that the presence or absence of epigenetic marks significantly correlated with the chronological age of the 34 twin pairs from the study.2 The physical marks on the genes, specifically methylations to cytosines (the process by which a small chemical tag, a methyl group, is added to one of the four DNA bases that are used in the encoding of the genetic code), were the equivalent of molecular wrinkles. Horvath had cleverly come up with a statistical model that could be used to count them and date individuals. He called the formula, which calculates the epigenetic marks on the relevant genes, a methylation (or epigenetic) clock for estimating the age of a tissue sample. The result was striking, and the very first epigenetic clock could estimate the age of individuals to within 5.2 years.
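The statistical idea behind such a clock can be sketched with synthetic data. Horvath’s actual clocks use penalized regression over many CpG sites; this toy version substitutes ordinary least squares on made-up methylation values, purely to show how a weighted sum of site measurements can read out age:

```python
# Toy epigenetic-clock sketch on synthetic data: age is estimated as a
# weighted sum of methylation fractions at a handful of sites. (Horvath's
# real clocks use penalized regression over far more sites; this is only
# an illustration of the underlying idea.)
import numpy as np

rng = np.random.default_rng(0)
n_people, n_sites = 200, 10

ages = rng.uniform(5, 90, n_people)
site_slopes = rng.normal(0, 1, n_sites)
# Simulate methylation that drifts linearly with age, plus measurement noise:
meth = ages[:, None] * site_slopes[None, :] / 100 \
    + rng.normal(0, 0.05, (n_people, n_sites))

X = np.column_stack([meth, np.ones(n_people)])   # add intercept column
coef, *_ = np.linalg.lstsq(X, ages, rcond=None)  # fit the "clock" weights

predicted = X @ coef
error = np.mean(np.abs(predicted - ages))
print(f"mean absolute error: {error:.1f} years")
```

Even this crude fit dates the synthetic subjects to within a few years, which gives a feel for why the real signal in Horvath’s saliva data was, in his words, so strong that any statistical model would see it.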
The advancement of clock theories represented a movement from simple correlation to a causal link between epigenetics and aging. A breakthrough in understanding came from Horvath’s discovery of the “pan-tissue clock.” This is the set of methylation markers that predict the biological age of an individual, not just in one type of tissue, such as saliva from the original study, but in all the various tissues that comprise an organism.
“When I developed the pan-tissue clock, that really garnered the attention of the aging community,” Horvath says. “A pan-tissue clock was paradoxical because methylation is supposed to control cell identity,” and remains fixed through adulthood. When Horvath and his colleagues established that epigenetic clocks counted time at the same pace across all tissues, whether that was quickly dividing blood cells or notoriously slow and highly differentiated brain neurons, the race was on to understand the fabric of time that the clocks are measuring. The universal clock, the key finding of Horvath’s 2022 paper, takes the pan-tissue clock one step further. It chimed the final stroke that unequivocally showed a predictable pattern to aging not only within the body of a single organism, but across mammals.
Horvath is excited about the “vampire idea” of anti-aging.
One big question that arises from Horvath’s research is whether DNA wrinkles are the cause or the effect of aging. Could they simply be akin to cosmetic changes to the body that bear no consequence on the ultimate outcome? After all, a clock simply measures time, it isn’t its driver.
“It is the critical question,” Horvath says. “The discussion we have today pertains to fourth-generation clocks. These are clocks that hopefully are comprised of cytosines that truly have a causative role in the aging process. But we are not there yet. We have some understanding that cytosines may play a causal role,” he stressed. Of particular interest are “enhancer” regions of the genome, which exaggerate the role of certain genes by activating them to exorbitant levels. A recent study of Alzheimer’s disease, for example, found that methylation losses in specific enhancer regions occur in normal aging neurons, but are accelerated in patients with the disease.3
Then there’s the billion-dollar question. If aging, as Horvath said, is a coding error, can it be fixed? Right now, Horvath is excited about an avenue of research he calls the “vampire idea.” In a recent paper, Horvath and colleagues presented the results of a clinical trial where concentrated umbilical cord plasma was injected into 18 audacious senior volunteers (60 to 95 years old), all deemed to be of “normal health for their ages.”4 The researchers measured the effect of the plasma treatment on the subjects’ aptly named GrimAge clock, an epigenetic predictor of “time to death” in humans.5 As part of the trial, participants were injected with 100 ml (roughly half a measuring cup) of concentrated umbilical cord plasma, blood that is collected and processed at the time of a baby’s birth, on a weekly basis for a total of 10 weeks. “Sure enough, GrimAge showed that the umbilical cord plasma rejuvenated the participants—by a small amount, but it was statistically significant,” Horvath says. Earlier studies had shown that transfusing young blood can dramatically reenergize elderly mice. “We found that when a young mouse had coupled circulation to an older mouse, the old mouse becomes younger and lives longer—it is a very striking result,” says Gladyshev, who has participated in the research.
Horvath recently left academia to join a biotechnology startup, Altos Labs, launched in 2022 with $3 billion in investments. Aging, he says, is accompanied by a breakdown of resilience in cells. COVID-19 is the most salient recent example. The virus affects the elderly more drastically because their cells lack the resiliency to fight back. The opportunity to focus on improving “cellular health and boosting resilience,” Horvath says, was the next logical step.
Elena Kazamia is a scientist and freelance journalist with a Ph.D. in plant sciences from the University of Cambridge. She is originally from Greece.
Lead image: Bluehatpictures / Shutterstock
1. Lu, A.T., et al. Universal DNA methylation age across mammalian tissues. bioRxiv (2022). DOI: 10.1101/2021.01.18.426733.
2. Bocklandt, S., et al. Epigenetic predictor of age. PLoS One (2011). DOI: 10.1371/journal.pone.0014821.
3. Li, P., et al. Epigenetic dysregulation of enhancers in neurons is associated with Alzheimer’s disease pathology and cognitive symptoms. Nature Communications 10, 2246 (2019).
4. Clement, J., et al. Umbilical cord plasma concentrate has beneficial effects on DNA methylation GrimAge and human clinical biomarkers. Aging Cell 21, e13696 (2022).
5. McCrory, C., et al. GrimAge outperforms other epigenetic clocks in the prediction of age-related clinical phenotypes and all-cause mortality. Journal of Gerontology, Series A, Biological Sciences and Medical Sciences 76, 741-749 (2021).
One question for Thomas Nicholas, a computational plasma physicist and former fusion researcher who now studies climate science at Columbia University. He was the lead author of the 2021 paper, “Re-examining the role of nuclear fusion in a renewables-based energy mix.”
When will fusion energy light our homes?
I was pleased to see recently that researchers at the Lawrence Livermore Lab had achieved the goal of fusion ignition. But I felt that the reaction was somewhat disproportionate. A piece of context that a lot of people missed is that the multi-billion dollar National Ignition Facility was designed to achieve that point from the start—that’s why “ignition” is in the name of it. All these breathless headlines about whether this accelerates the timeline to fusion seem ironic in the context of it actually being a decade behind what was promised when the thing was built in 2009. When fusion energy might light our homes depends on whether you mean: timeline to first power-producing device, or to significant deployment in the world. Those are two very different things.
In the case of fusion—because it’s a large infrastructure project with big capital costs, and you would be building parts that have a long lifetime—its scale-up rate would be very slow in a reasonable economic projection of how you would try to make it profitable. It’s the opposite of solar panels and wind turbines, and so on, where you can build them quickly. They have a short half-life—they stop working relatively quickly compared to big concrete nuclear reactors. But that means their replacement rate is very high. The replacement rate sets how quickly you can scale up in the first place. If you want to finance building factories for nuclear power stations effectively, then you are only going to finance the number that can meet your eventual replacement rate once you’ve hit market saturation.
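Nicholas’s replacement-rate argument can be captured in a two-line toy model. Under the strong simplifying assumption that factory capacity is sized to a fleet’s eventual replacement rate, ramping up from zero to a full fleet takes roughly one asset lifetime, which is why long-lived fusion plants would scale slowly and short-lived solar panels scale fast. All numbers here are illustrative, not projections:

```python
# Toy model of the scale-up argument: if factories are sized to the
# eventual replacement rate (fleet_size / lifetime), the sustainable
# build rate equals that replacement rate, so building out a full fleet
# from zero takes about one asset lifetime. Numbers are illustrative.

def years_to_build_fleet(fleet_size: int, lifetime_years: float) -> float:
    build_rate = fleet_size / lifetime_years  # plants (or panels) per year
    return fleet_size / build_rate            # comes out to lifetime_years

print(years_to_build_fleet(500, 80))  # long-lived fusion plants: 80.0 years
print(years_to_build_fleet(500, 25))  # shorter-lived assets: 25.0 years
```

The algebra is deliberately trivial: the build-out time collapses to the asset lifetime itself, which is the crux of why a fleet of century-scale reactors cannot appear in a decade.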
The longer fusion ventures wait to get to market, the lower the cost of renewable energy will be.
Assets with a shorter lifetime are easier to scale up quickly because there’s more reason to build more factories for them. The implication for fusion is that the difference between the first reactor that produces energy, and hundreds of reactors across the world that produce a non-negligible amount of power on grids, is about 50 years in any reasonable estimate. It’s not like once you build the first one, you can just suddenly build 100 more immediately, because that doesn’t make sense on multiple different levels, including economics. To some extent, commercial fusion ventures are running against the clock in that, the longer they wait to get to market, the lower the cost of renewable energy will be by the time they get there. But that doesn’t mean that there’s one point in time at which they’ll become totally irrelevant, because they’re not in the same category of production. The renewables are intermittent. One of the advantages of fusion is that you can turn it on and off when you like.
The relevant question to be asking is: How much will we need to rely on fusion once it’s ready, given our future reliance on renewable energy? Renewable energy is getting much, much cheaper. It’s expanding very rapidly. All projections, even from economically conservative groups, suggest that the fact that the prices have dropped so far means that we will continue to scale them up. Climate change on top of that is an additional incentive to try our hardest to scale them out as fast as possible. The role for fusion would not be, “Oh, we’ve invented it. Let’s not bother building more wind turbines.” It would be, “We’ve invented it. Is this a better way than our alternative options to fill in the gaps and supply the last bit?”
It’s pretty clear that we’re going to have quite a lot of renewables by 2100. Let’s say, on average, 70 percent of your electricity was generated through wind and solar. Then the question becomes: What is the other 30 percent, and how would fusion fit in? Let’s say that you built your first serious prototype fusion plant in 2040 or 2050, which in itself is just a very uncertain guess. Then you’ve got to scale out your plants one by one, and you’re expecting them to have a commercial lifetime measured in many decades. We have nuclear fission plants whose lifetime has been extended so that we expect them to be operating for almost 100 years. If you ask, “How long would it take at that rate of scale-up to get 10 or 20 percent of a country’s electricity supply from fusion?” then you’re talking about the year 2100.
Lead image: Ezume Images / Shutterstock
The Jackson Wild Media Lab offers a fellowship each year to media creators to hone their skills in furthering science and conservation communication. The nine-day fellowship is highly competitive—2022’s 16 participants were chosen from a pool of 350 applicants, including Brazilian native Laura Pennafort and Tennessean Johnny Holder. Jackson Wild pairs scientists with filmmakers who dive right into conceiving and executing a project within a short time frame (four days). In addition to a crash course in production using professional-grade tools and equipment, media fellows are exposed to input and advice from seasoned filmmakers and other experts.
Pennafort began her academic studies in biology, eventually doubling up to study filmmaking. She completed her education in England, where she has lived for five years. Increasingly focused on wildlife documentaries, Pennafort finds in them a perfect merging of her passions for science and nature.
I have a deep impression now of a whole world I’ve been neglecting.
Holder’s life experience includes eight years in Army Special Operations, an earth science and geography undergraduate degree, and “by serendipity,” enrollment in Montana State University’s Natural History Filmmaking program. Holder’s thesis film for Montana State, “Sonora,” won the 2022 special jury award for Diversity and Inclusion in the Jackson Wild Media Awards (it was also an overall finalist). During his time as a media fellow, Holder and his colleagues made “Sound of the Lake,” focused on bird life in Europe’s second-largest reed bed.
Nautilus caught up with Pennafort and Holder to talk about their films, their passions, and their experience with Jackson Wild.
You are originally from Brazil, and now live in England. Do these locations influence your filmmaking?
I’ve been in England for five years. I moved here because things were getting uncertain in Brazil and I didn’t know if I could finish my education there. I studied biology and then combined it with filmmaking. Wildlife documentaries combine these passions for me, and England is a great place to pursue this work. Blue-chip documentaries are often produced here. These films show nature as mostly untouched, animals doing beautiful things. It’s a bit of a fantasy, but it serves a real purpose, to engage the viewer.
At the same time, I am committed to sharing the amazing biodiversity of Brazil with the rest of the world. People don’t realize what’s there and how threatened it is—the situation calls for a more straightforward call to action.
Part of your fellowship entailed making a short film with several of your peers in four days. You made “The Bird Ringer,” about an ornithologist who monitors birds to better understand and protect them. How did you coordinate so quickly to develop the narrative?
It was challenging. We interviewed Flora about her work, and the whole time I kept thinking: I don’t see a conflict or a storyline in what she is telling us. We filmed her and studied the transcript of her talk to try to structure the soundbites.
This kind of activism is another way to engage people to care about wildlife.
Flora sets mist nets to capture birds, which she then measures and weighs, checking their overall health. Then she places a metal ring around the bird’s leg. In the event she captures the bird again, she can compare notes to evaluate how well it’s doing.
Yes, so we filmed her at work, and different habitats in the park. One incredibly useful dimension of the fellowship was that we were able to show a rough cut of the film to a team of industry professionals before we finalized it. They helped us streamline our presentation of the material. We had jumped back and forth between Flora in the lab and outside. They told us this was disorienting. So we reflowed the scenes, staying inside longer, staying outside longer.
Was it easy enough to agree on how to do that?
Yes, we worked it out. We also got great assistance with the ending. We saw three different ways to conclude the film: to emphasize her work with kids, to highlight her research, and the ending we chose, which focuses on how, in Flora’s words, “you only protect” what you know. I wanted to keep all three points in the film, but understood that for it to work, the film needed to have one direction.
Prior to the fellowship, you made a film about the Pantanal, Brazil’s extraordinary tropical wetland, home to incomparable biodiversity.
The Pantanal experienced a very severe drought in 2020, right as the pandemic started. The wetlands naturally go through dry cycles and flooding, but this was extreme. A wildfire took over the whole ecosystem. I learned about a group of people rescuing animals burned in the fire—jaguar, tapir, all kinds of species were hobbled by their injuries. I interviewed people rescuing and rehabilitating the animals—I wanted to tell their story. These were ordinary people coming to help; a tour guide who usually takes people to see the animals was confronted with everything on fire. She dedicated herself to stopping the fire and rescuing the animals. This kind of activism is another way to engage people to care about wildlife.
How did your experience at Montana State influence your filmmaking?
They told us at the beginning: you are filmmakers first, scientists second. Although I have a strong leaning toward wildlife, I found myself drawn to human-based stories. I made a lot of off-the-wall projects, about artists working in maximum security prisons, and clowns performing in refugee camps. I went to Colombia, South America, to do a follow-up project on the clowns, and pivoted to ex-FARC soldiers doing species rehabilitation in the Amazon. [FARC refers to the Revolutionary Armed Forces of Colombia, a Marxist guerrilla group involved in that country’s long and complex internal conflicts.] I wanted to do my thesis on the subject but it was far outside the scope of what I could accomplish at the time. I came across this group of FARC birders, who had spent some time with Juan Pablo Culasso, the blind birder who became the subject of “Sonora.”
“Sonora” focuses on an expert birder who knows his subject mostly by ear. He is completely blind and has been since birth. Watching the film is an unusual experience, because you are focusing on the audio experience he moderates, as much as the visual. You understand you are seeing what he can’t see, but his hearing is far superior. It feels like he is much closer to the birds.
It was easy to show his character—he’s so strong. All the narration came from a talking-head interview—we didn’t use that footage. He is so poetic, he speaks like he is reading something complete and practiced, but that’s just the way he talks. He started birding when he was quite young, 6 or 7. His uncle gave him a field recorder. When you put on headphones with a special microphone, even for a non-blind person it opens up another world.
And he’s making amazing contributions to what we know about birds.
He estimates he has identified about 1,100 bird species, each with three to five different calls. So he estimates he knows about 3,000 to 5,000 calls. He has been building sound catalogs and uses a Spotify account—he’s very diligent about that.
It’s interesting that your short film as a media fellow also focuses on sound.
Well, we were assigned a scientist and had four days, as you know, to make the film. Part of the challenge was to find one narrative to characterize the long-time work of Erwin Nemeth, who has spent decades observing the reed belt at Lake Neusiedl—he knows so much about every aspect of its history and life. He was excited to focus on the soundscape, just one aspect of the reed belt, which is one of the last of its kind. Most reed belts in Europe have been lost to development, and they are critical habitat for migrating birds. He was one interesting cat.
Both experiences must have taught you a lot about birds.
Yes, I have a deep impression now of a whole world I’ve been neglecting and not appreciating fully. I hear birds so much more now. I learned about the health of an ecosystem and how birds contribute to it, what monocultures do to bird life.
But the work also left an emotional and spiritual impression. On my last mission in the military, I experienced injuries and got Lyme disease. I underwent a slew of events that forever altered my path in life. My athleticism, my ability to bounce back and go hard, these felt taken away from me. From Juan Pablo especially, I learned to sit in silence. I learned how to listen and heal in that sonic world.
Lead image: NPNeusiedlerSee / YouTube
A mother gives her baby her all: love, hugs, kisses … and a sturdy army of bacteria.
These simple cells, which journey from mother to baby at birth and in the months of intimate contact that follow, form the first seeds of the child’s microbiome—the evolving community of symbiotic microorganisms tied to the body’s healthy functioning. Researchers at the Broad Institute of MIT and Harvard recently conducted the first large-scale survey of how the microbiomes of a mother and her infant coevolve during the first year of life. Their new study, published in Cell in December, found that these maternal contributions aren’t limited to complete cells. Small snippets of DNA called mobile genetic elements hop from the mother’s bacteria to the baby’s bacteria, even months after birth.
This manner of transfer, which has never been seen before in the cultivation of an infant’s microbiome, could play a crucial role in promoting growth and development. Understanding how a child’s microbiome evolves could explain why some children are predisposed to certain diseases more than others, said Victoria Carr, a principal bioinformatician at the Wellcome Sanger Institute who was not part of the study.
“It’s a big question: How do we get our microbes?” said Nicola Segata, a professor at the University of Trento in Italy who was also not part of the study.
Our bodies are home to about as many bacterial cells as human cells, and most of them live inside our guts. Each of us harbors massively diverse libraries of bacterial species and strains acquired throughout life. But babies start out almost sterile. The first major infusion of microbes is thought to come from the mother during birth as the infant exits the womb. That bacterial gift creates the scaffolding for a thriving microbial community in the body that sustains us for the rest of our lives. (Infants born by cesarean section don’t get the same initial infusion of microbes that babies get from vaginal birth, but they slowly gather them later.)
One of the microbiome’s effects, Segata explained, is to condition its host’s immune system and metabolism during the first couple of years of life. These initial training days “can have long-lasting consequences that are right now still difficult to comprehend,” he said.
That’s because the metabolites, or chemical products of metabolism, made by the microbiome are thought to influence a baby’s cognitive and immune system development, particularly during a sensitive period in the 1,000 days before and after birth, said Karolina Jabbar, an internist and researcher at the University of Gothenburg who is a co-lead author on the new paper.
In the new study, led by Ramnik Xavier, the director of the Klarman Cell Observatory at the Broad Institute, the researchers collected stool samples from 70 pairs of mothers and their babies, starting early in pregnancy and continuing for the baby’s first year. The researchers then surveyed the mix of microbes and compounds present in the samples and ran genetic analyses to determine which species and which strains of microbes were present. With this data, they could see how the microbiomes of the mothers and babies coevolved during that time.
As they expected, the infants’ microbiomes were different from their mothers’, and the influence of diet on their microbiomes was clear. The infants had hundreds of metabolites that their mothers didn’t.
The big surprise for the team was that even when a baby lacked useful bacterial strains present in the mother, the baby’s microbiome still had snippets of genes belonging to those strains.
“How could the species influence the infant microbial composition without even being part of it?” Jabbar said. She and her lab mates started to wonder if this could be explained by horizontal gene transfer, a quirky process in which genes from one species hop to another species instead of being passed down to offspring. Horizontal gene transfers are common within communities of bacteria—they contribute greatly to the spread of antibiotic-resistant genes in a variety of pathogens, for example—and they’ve also been found to occur in multicellular organisms.
Still, the researchers weren’t prepared to see hundreds of genes hopping between bacterial communities—from the mother’s microbiome to the baby’s. “It’s one of those things that you don’t at first believe yourself,” said Tommi Vatanen, who is a research fellow at the University of Helsinki and co-lead author on the paper.
The researchers speculate that horizontal gene transfers may be most obvious when bacteria that thrive in the mother’s gut can’t survive in the unfamiliar environment of the infant’s gut. Maternal bacteria may enter the infant’s body through breast milk or as released spores that the infant swallows. Some bacteria will inevitably fail to colonize the child’s body and disappear. But they might last long enough for certain gene sequences to hop into more successful bacteria. If those genetic sequences take root in the genomes of bacteria inside the baby’s gut, they can bring over the functions they encode.
“The fact that even a transient existence of a donor cell can have such an impact to those persistent ones is really fascinating,” Carr said.
In some cases, these hops may have been made possible by prophages—dormant viruses that replicate in bacteria. In the stressful environment of the baby’s gut, prophages may become active and start moving between bacteria, carrying embedded bacterial genes with them.
In their analysis of infant stool samples, Vatanen, Jabbar, and their colleagues identified an apparent example: A prophage that was integrated into the DNA of one bacterial species showed up in a different bacterium months later.
“It’s quite convincing evidence that this particular phage had jumped between two different species,” Vatanen said. The researchers also found that genes hopped between bacterial species in other ways, such as through direct cell-to-cell contact or through a bacterial cell engulfing DNA released into the environment.
One big group of genes that jumped encoded the cellular machinery that makes horizontal gene transfers possible. Other mobile sequences helped with carbohydrate and amino acid metabolism, and may therefore have greatly benefited the bacteria. For example, the results suggest that genes related to the digestion of carbohydrates found in breast milk might be shared from mothers to infants in this way, Jabbar said. The researchers don’t know for certain that the horizontal transfers benefit the baby directly, but by assembling a more capable gut microbiome, they may help with the development of the baby’s immune system.
Some of these genetic sequences turned up in new bacteria months after birth, which suggests that the transfers continued to occur during that time. It’s not clear whether transfers were happening before birth as well, but the researchers did find that the mother’s microbiome evolved during pregnancy. Some of the changes seemed likely to affect the body’s ability to tolerate glucose. Those findings suggest that the diabetes some people develop while pregnant could be linked to the microbiome.
When the researchers collected stool samples from the infants, they also took samples of their immune cells. Now they are planning to use those samples to examine how the bacteria that infants carry, including those bacteria that contain these mobile elements, interact with immune cells. Insights from these experiments could lead to a better understanding of how and why some people develop allergies or autoimmune diseases.
The existence of such mobile elements has been known since the pioneering geneticist Barbara McClintock discovered them in the 1940s, an achievement for which she won the Nobel Prize. “But it’s never really been characterized to such depth until recently,” Carr said. “Now that we’re getting more insights, we’re realizing that actually, mobile genetic elements are having a bigger impact than we previously realized.”
In us, it turns out, that impact starts very early in life.
Lead image: Through transfers of bacteria and genes that continue for many months after birth, a mother may nurture the healthy development of an infant. Credit: Kristina Armitage / Quanta Magazine.
Local Honduran fishers mostly avoid fishing in Tela Bay on the country’s Caribbean coastline. Nonetheless, they have a name for the shapes and forms on the seafloor that waft in and out of view with the shifting glint of the sun. They call them “rocas,” or rocks.
Just over a decade ago, Antal and Alejandra Börcsök, newly trained divers, heard about the rocas and, curiosity piqued, donned their scuba gear to explore. On the seafloor, rather than inorganic geologic forms, Antal and Alejandra discovered rocks that were very much alive. Everywhere they looked they saw growing, thriving coral.
The Börcsöks knew that Caribbean coral were plagued by disease, bleaching, and death. Yet as novice divers, they hadn’t seen enough to judge Tela’s coral. So, they invited friends who were active in coral monitoring to have a look.
Diseases that have ravaged other Caribbean reefs are apparently absent from Tela Bay.
Back on the surface, Antal recounts how their friends gushed, “That is the greatest reef we’ve ever visited! Is there more like that?” Now, having dived throughout more of Tela Bay than anyone, Antal can say that there is. In fact, there’s a lot more reef like that.
But why so much healthy coral exists is mysterious. “We should have nothing in Tela,” Antal says. “Everything that’s bad, we do in Tela.”
That no one seems to have looked beneath the surface of Tela Bay before the Börcsöks did is probably because it’s such an unlikely spot for a thriving reef. About 10 kilometers west of Tela Bay, the Ulúa River, Honduras’ largest, empties into the Caribbean. It is loaded with sediments, which are typically problematic for coral. Sediments block the sunlight required for photosynthesis by the algae that live inside corals’ tissues and supply as much as 90 percent of their nutrition. Sediments can also physically smother reefs.
“Not only that,” says Antal, “this is the place where the banana republic started.” In 1913 the United Fruit Company, which later became Chiquita, received concessions from the Honduran government to operate a rail line into the city of Tela, along with 162,000 hectares of land for banana plantations. Today the remains of the 1,000-foot wharf where boatloads of bananas were exported still rise above the surface of the water, but the banana trees have largely been replaced by African oil palms and those plantations have expanded.
Tela receives more than a meter of rainfall a year; as it runs into the bay it brings fertilizers from the plantations with it. Compounding the agricultural runoff is waste from Tela’s roughly 100,000 inhabitants. The city has no sanitation system except for pipes that run directly into the bay.
Corals evolved to live in the sea’s deserts, places where organic molecules like those in fertilizers and sewage are nearly absent. When exposed to elevated concentrations of these nitrogen-rich compounds, they often sicken.
Yet after more than a century of inundation by sediments, agricultural runoff, and sewage, the corals in Tela are unaccountably thriving. The reason the fishers avoid the reef is because it is so abundant and complex that small fish can hide from predation. Big fish don’t bother hunting there, and neither do the fishers.
Discovering the reef galvanized Antal and Alejandra. They started a project to protect it, and within two years saw the passage of a local law to do so. Their company, Tela Marine, partnered with an English tour operator, Operation Wallacea, which helps graduate students develop field projects.
Dan Exton, the head of research at Operation Wallacea, recalls standing on the beach in Tela with Antal for the first time and thinking there couldn’t possibly be a coral reef beneath the murky water. “I almost cancelled the dive,” he said. But as soon as he descended, Exton saw “mind-blowing coral. I’d never seen a reef like that. Everywhere you looked, something unusual was happening.”
Since then, Exton has overseen the work of more than 500 students in Tela Bay; their findings confirm the unusual richness of the reef. Whereas at the nearby island of Utila, coral cover—the proportion of a reef’s surface where healthy coral grows—hovers around 20 percent, in Tela it remains more than threefold greater.
Elkhorn and staghorn coral species that are critically endangered in the rest of the Caribbean grow in rich thickets along the bay’s shores. Mountainous star coral, another endangered species, grows in massive plated colonies as big as backyard sheds. Lettuce corals unfurl in long, rich carpets. Their blades form tiny three-dimensional apartments for shrimp, snails, clams, worms, and tiny sea stars, and provide spaces where small fish can hide from predators.
“As far as we know, there isn’t any other reef in the world that looks like this.”
One important observation is that diseases that have ravaged other Caribbean reefs are apparently absent from Tela Bay.
Since 2014, stony coral tissue loss disease has decimated reefs throughout the Caribbean, melting the tissue of more than 20 species of brain, maze, and pillar coral like hot wax. These species are found in Tela Bay, yet no one has seen the disease there.
Here and there, the pick-up-sticks spines of sea urchins wave curiously from within crevasses. These clementine-sized urchins are critical to reef health, grazing algae that can easily overgrow coral. In the 1980s an epidemic wiped out urchins throughout the Caribbean, and they have never rebounded. In Tela Bay, the numbers of urchins remain at pre-epidemic levels, roughly 100 times more abundant than elsewhere in the region.
Giant barrel sponges are another outlier. On nearby Roatan, divers used to pose for pictures inside millennium-old sponges so big that the dive spot was referred to as “Texas,” because everything is so big in Texas. But in 2018, an affliction called orange band disease killed the ancient organisms in just four months. In Tela Bay, barrel sponges were unaffected.
One more threat facing reefs is heat. As Earth warms, half of all coral reefs are thought to have already succumbed to bleaching, in which a coral’s symbiotic algae departs the partnership, leaving the coral bereft of color and nutrition. Bleaching is caused by warming waters. Tela Bay’s reefs, however, have handled the heat.
Anne Cohen, a marine biologist at Woods Hole Oceanographic Institution who searches out heat-tolerant reefs, performed a preliminary estimation of heat stress on Tela Bay’s corals for this article. Her team found that, although sea surface temperatures reached 31 degrees Celsius—hot enough to blanch any reef in Florida—there had been comparatively few episodes of sudden warming, which are especially conducive to bleaching.
As a result, her lab’s models suggest that bleaching would only have been expected once, in 2017. “It just hasn’t gotten hot enough there,” Cohen says. That jibes with Antal’s observations. He rarely sees bleaching in Tela.
In 2018, armed with reef survey data, and working with local NGOs and the Ministry of Agriculture, Tela Marine shepherded an act through Congress establishing the first Marine Wildlife Refuge in Honduras, strengthening protections for Tela Bay from the local to national level.
Scientists searching for healthy coral might have looked in the wrong places.
But just when the future seemed assured, a Chinese company proposed developing an iron mining operation along the Ulúa River. It had the potential to dump heavy metals toxic to marine life into the bay. “That was going to kill the reef, basically in a year,” Antal said.
Ultimately, the mining operation was halted, in part because of public testimony against it, but the threat showed just how little stood between the reef’s survival and economic forces. Even an act of Congress was a flimsy line of defense.
“We realized that the biggest problem was that nobody knew there was a reef there, right?” Antal points out. “So how do we take people to the reef?”
In a place where a small fraction of the population dives, the answer was to bring the reef to them. Less than a year ago, Tela Marine opened the only public aquarium in Central America. Twelve thousand visitors a month already pour through the aquarium doors, which puts it on track to be one of the largest attractions in the country.
An intentional part of the draw is the price of admission: free, except for an eight-minute speech from Antal or one of the aquarists on why the colorful corals, creeping sea stars, spiny urchins, and darting fish they are about to see are such a treasure.
While the Börcsöks work to protect the reef, questions remain about what makes it so healthy. Is there something about the bay that protects its corals from bleaching? Have the corals adapted to a century of runoff? How do they thrive with so much sediment? Is there something special about their symbiotic algae? What prevents diseases from spreading in Tela when they rampage through the rest of the Caribbean? Are the coral, urchins, or sponges genetically different? Most importantly, can this reef continue to survive?
Currently, answers are unknown. Like the public, few scientists are aware of the reef’s existence. Aside from Operation Wallacea, little scientific attention has been paid to the reef. Research has largely involved observation and monitoring, although plans for more detailed studies are now in the works.
Even before those questions are answered, when Antal stands before a crowd of enthusiastic aquarium visitors he can already say, “We have something that we can be proud of in Honduras. This reef is unique. As far as we know, there isn’t any other reef in the world that looks like this.”
So far, that is. Dan Exton notes that the implications of finding the reef stretch well beyond Tela Bay. “It can’t be the only one out there that’s like it,” he says. Exton suspects that scientists searching for healthy coral might have looked in the wrong places as seas shift to warmer, more polluted conditions.
“If you were to look at other turbid, cloudy, impacted bays around the Caribbean, you may well find other healthy reefs,” says Exton.
To him that’s a reason for optimism. ”We get so bogged down in coral reef science by the idea that, in 50 years’ time, corals won’t exist anymore,” Exton continues. “I think there’s a lot more hope for reefs than we give them credit for sometimes. For me, my personal hope comes from Tela Bay.”
Lead image: Tela Bay, Honduras is hot, polluted, and the last place anyone expected to find a thriving coral reef — but species dying out elsewhere in the Caribbean have continued to flourish in its waters. Photo by Antal Börcsök
Computers and information technologies were once hailed as a revolution in education. Their benefits are undeniable. They can provide students with far more information than a mere textbook. They can make educational resources more flexible, tailored to individual needs, and they can render interactions between students, parents, and teachers fast and convenient. And what would schools have done during the pandemic lockdowns without video conferencing?
The advent of AI chatbots and large language models such as OpenAI’s ChatGPT, launched last November, creates even more opportunities. They can give students practice questions and answers as well as feedback, and assess their work, lightening the load on teachers. Their interactive nature is more motivating to students than the imprecise and often confusing information dumps elicited by Google searches, and they can address specific questions.
The algorithm has no sense that “love” and “embrace” are semantically related.
But large language models “should worry English teachers,” too, Jennifer Rowsell, professor of digital literacy at the University of Sheffield in England, tells me. ChatGPT can write a decent essay for the lazy student. It doesn’t just find and copy an essay from the web, but constructs it de novo—and, if you wish, will give you another, and another, until you’re happy with it. Some teachers admit that the results are often good enough to get a strong grade. In a New York Times article, one university professor attests to having caught a student who produced a philosophy essay this way—the best in the class. High school humanities teacher Daniel Herman writes in The Atlantic that, “My life—and the lives of thousands of other teachers and professors, tutors and administrators—is about to drastically change.” He thinks that ChatGPT will exact a “heavy toll” on the current system of education.
Schools are already fighting a rearguard action. New York City’s education department plans to ban ChatGPT in its public schools, although that won’t stop students using it at home. Dan Lewer, a social studies teacher at a public school, suggests on TikTok that teachers require students submitting from home also to provide a short video that restates their thesis and evidence: This, Lewer says, should ensure “that they are really learning the material, not just finding something online and turning it in.”
OpenAI’s researchers are themselves working on schemes to “watermark” ChatGPT’s output; for example, by having it select words with a hidden statistical signature. How easy it will be for harried teachers to check a class-load of essays this way is less clear—and the method might be undermined by “translating” ChatGPT’s output by running it through some other language-learning software. In a digital arms race of cheating student against teacher, I wouldn’t count on the teacher being able to stay ahead of the game. Some are already concluding that the out-of-class essay assignment is now dead.
But the challenge posed by ChatGPT isn’t just about catching cheats. The fact that AI can now produce better essays than many students without a smidgeon of genuine thought or understanding ought to prompt some reflection on what it is that we aim to teach in the first place. Herman says it is no longer obvious to him that teenagers will need to develop the basic skill of writing well—in which case, he says, the question for the teacher becomes, “Is this still worth doing?”
Without wishing to detract from the impressive algorithmic engineering, ChatGPT is able to generate more-than-passable essays not because it is so clever but because the route to its goal is so well-defined. Is that a good thing in the first place? Many education experts have long felt a need to change the way English is taught, says Rowsell, but she admits that “teachers are finding it very difficult to make a substantial change from what we’ve known so long. We simply don’t know how to teach otherwise.”
Language AI might force matters to a head. “Science has leapt ahead, and we [in literacy education] don’t know quite how to grapple with it,” says Rowsell. “But we’ve learned that we don’t fight this stuff—let’s understand it and work with it. There’s no reason why classical essay-writing and ChatGPT can’t work together. Maybe this will catalyse the change we have to make within teaching.” If so, it should start with a consideration of what language is for.
Algorithms that can interact using natural language have been around since the earliest days of AI. Computer scientist Joseph Weizenbaum of MIT recounted how, in the 1960s, one of his colleagues ended up in an angry remote exchange with one such program, called Eliza, thinking he was in fact conversing with Weizenbaum himself in a particularly perverse mood.
That such a crude language program as Eliza could fool a user reveals how innately inclined we are to attribute mind where it does not reside. Until quite recently, language-using algorithms tended to deliver little more than awkward phrases full of non-sequiturs and solecisms. But advances in the technology—the exponential growth in computational power, the appearance of “deep-learning” methods in the mid-2010s, an ever-expanding body of online data to mine—have now produced systems with near-flawless syntax that supply a spooky simulacrum of intelligence. ChatGPT has been hailed as a game changer. It can perform all manner of parlour tricks, like blending sources and genres: a recipe for fish pie in the style of the King James Bible, or a limerick about Albert Einstein and Niels Bohr’s arguments over quantum mechanics.
There are evident dangers in these large language model technologies. We have little experience in dealing with an information resource that so powerfully mimics thought while possessing none. The algorithm’s blandly authoritative tone can be harnessed for almost any use or abuse, auto-generating superficially persuasive screeds of bullshit (or rather, repackaging in an apparently “objective” and better-mannered format the ones that humans have generated).
If ChatGPT churns out a credible essay, it’s because schools set an equivalent task.
The algorithm is ultimately still rather “lazy,” too. When I gave it the essay assignment of summarizing the plot of Frankenstein from a feminist perspective, ChatGPT did not seem to have consulted the vast body of scholarship on that issue, but instead provided a series of stilted and clichéd tropes barely connected to the novel: It “can be seen as a commentary on the patriarchal society of the time,” and “the ways in which women are often judged and valued based on their appearance.” It would probably say the same about Pride and Prejudice.
Some of these flaws might be avoided by giving the large language models better prompts—although to get a really sharp, informed response, the user might need to know so much that writing the essay would pose no challenge anyway. But at root the shortcomings here reflect the fact that the algorithm is not tracking down the feminist literature on Frankenstein at all, but is simply seeking the words and phrases that have the highest probability of being associated with those in my prompt. Like all deep-learning AI, it mines the available database (here, texts “scraped” from online sources) for correlations and patterns. If you ask it for a love sonnet, you are likely to get more words like “forever,” “heart,” and “embrace” than “screwdriver” (unless you ask for a sonnet about a screwdriver, and yes, I admit I just did that; the result wasn’t pretty). The algorithm has no sense that “love” and “embrace” are semantically related. What is so impressive about ChatGPT is how it is able not just to smooth out the syntax in these word associations but also to create context. I think it’s unlikely that, “You are the screw that keeps me firmly braced,” is a line that has ever appeared in a human-composed sonnet (and for good reason)—but still I’m impressed that you can see what it is driving at. (ChatGPT can do puns, too, although I take the blame for that one.)
For factual prompts, the texts that emerge from this probabilistic melange generally express consensus truths. This guarantees the exclusion of any particularly controversial viewpoints, because by definition they will be controversial only if they are not the consensus. Asking a large language model who killed JFK will deliver Lee Harvey Oswald. But ChatGPT occasionally invents random falsehoods, too. In its biography of me, it knocked two years off my age (I’m OK with that) and gave me a fictitious earlier career in the pharmaceutical industry (just weird).
The question for the teacher becomes, “Is this still worth doing?”
If ChatGPT churns out a credible school essay, it’s because we are setting school pupils an equivalent task: Give us the generally agreed facts, with some semblance of coherence. Student essay-writing already tends to be formulaic to an almost algorithmic degree, codified in acronyms: Point-Evidence-Explanation-Link, Point-Evidence-Analysis-Context. Not only are students told exactly what to put where, but they risk having marks deducted if they deviate from the template. Current education rewards an ability to argue along predictable lines. There’s some logic in that, much as art students are sent off to galleries to draw and paint copies of great works, learning the skills without the burdensome and unrealistic demand of having to be original.
But is this all we want? Rowsell says it is hard for teachers or educationalists even to confront the question, since the cogs of education systems typically continue to turn merely on the basis that “this is how we’ve always done it.” A deep dive into the teaching of cursive handwriting, for example, reveals that there is no clear justification for it beyond tradition. But maybe now’s the right time to be asking this difficult question.
My instinct as an occasional educator—in the 1980s I used to teach computer classes in a prison, and I homeschooled my children for a time—is that of the scientist, I suppose: to explain how a thing works, beginning with its parts. “You won’t use it well unless you understand it first!” says my inner voice. But while I listed off unfamiliar computer components to my incarcerated pupils, one finally chimed in, “I don’t even know how to turn it on!”
I appreciate that sometimes schoolchildren must feel toward language learning somewhat as my prison class did toward computers, being told the unfamiliar names of the components (noun, embedded clause, fronted adverbial; CPU, RAM, bits and bytes) while thinking, “But I just want to know how to use it!” But ChatGPT seems to make all that groundwork otiose, much as your average computer user needs to know nothing of coding or shift registers. English writing? There’s an app for that. High-school humanities teacher Herman worries that students might be inclined to depend on language AI for all their writing needs, just as many will now consider it a waste of time to learn a foreign language in the age of real-time fluent translation AI.
I realize now that I was teaching prisoners about the inner workings of computers because I loved all that stuff. I suspect few if any of them did—but they rightly discerned that an ability to use a computer was going to become important in their life. I want my kids to love the way language works and what it can do, too. But they need first to know how to turn it on, so to speak. And the on-switch for language is not the distinction between nouns and verbs or the placement of subclauses, but the more fundamental process of communicating.
When we learn good use of language, we are not simply being trained to conform to a model. Those templates for sentence or essay construction do not follow some law of literature demanding particular arrangements of words, phrases, and arguments. However crude and formulaic they might be, ultimately they exist because they benefit the reader. In other words, reading is a cognitive process with its own dynamics, which the writer can facilitate or hinder. Sometimes all it takes to transform a confusing sentence into a lucid one is the movement of a single word to where it matches the cognitive processing of the reader. There is nothing esoteric about this; good communication is a skill that can be learned like punctuation. (And punctuation itself exists to aid good communication.)
The on-switch for language is not the distinction between nouns and verbs or the placement of subclauses.
What all this demands is empathy: an ability of the writer to step outside their head and into the reader’s. For factual text, the goal is usually clarity of expression. For fiction, the priority might be elsewhere: even, indeed, to impede instant understanding, not arbitrarily or perversely but in order to deliver a little jolt of surprise and delight when the meaning crystallizes in the reader’s mind. Music does the same with melody and rhythm, and this is partly how it moves and excites us. Shakespeare is famous for his sentence-inversions: a rearrangement of the usual order (“But soft, what light through yonder window breaks?”) to arrest the mind for a moment, perhaps for emphasis, perhaps for the sheer thrill of the lexical puzzle. On the larger scale, much of fiction is the art of controlled disclosure, of revealing information at the right moment and not sooner (or later).
Language is, at its heart, a link between minds. One theory of the origin of language, proposed by linguist Daniel Dor, argues that it arose not for simple communication but for “the instruction of the imagination.” It enabled us to move beyond the blank imperatives and warnings of animal vocalizations, and project the contents of one mind into another.
What does this imply for language AI? Precisely because the algorithm of a large language model does not have a communicative goal—it has no notion at all of what communication is, or of having an audience—it does not show us what language can do, and indeed was invented to do. It would iron out Shakespeare’s quirky acts of linguistic subterfuge. It would fail to capture the rhetorical tools with which a good historian makes her case memorable and persuasive. And it is hard to see how a language model could ever truly innovate, for that is antithetical to what it is designed to do, which is simply to ape, mimic, and, as a statistician would put it, regress to the mean, which tends toward the mind-numbingly drab.
Language use is the opposite of that, and nowhere more so than among young people, whose stylistic invention in speech and digital discourse is enormous and even rather joyous. Progressive educationalists long to tap into the multimodality of the online sphere, but are struggling to figure out how. Crucially, the media-savvy communicative sophistication of Gen Z relies on the assumption of shared concepts, standards, and points of reference, just as much as does the compact and stylized language of a Tang-dynasty poem. If space is made in education for this facet of language use, it may well be immune to colonization by AI.
There’s no denying that large language models raise a host of ethical and legal issues. So did the internet, and so do many other forms of deep-learning AI, such as deep fakes and face recognition. We grapple with these problems and learn to live with them. Some believe that, panic about AI-based essay writing aside, large language models will eventually become just another tool, perhaps situated somewhere between the humdrum utilitarianism of Excel spreadsheets and the creative possibilities of digital photography.
After all, written language is itself often a tool, a means to an end. This doesn’t mean we need to learn to use it only crudely. But it does mean that what the likes of ChatGPT can now offer is not only adequate for certain purposes but might even be valuable. People who struggle with literacy skills are already using ChatGPT to improve their letters of job application or their email correspondence. Some scientists are even using it to burnish their papers before submission. Arguably these AIs can democratize language in the way music software has democratized music-making. For those with limited English-language skills who need to write professionally, the technology could be a great leveller.
And just as electronic calculators freed up students’ time and mental space once monopolized by having to learn logarithms for complicated multiplication, large language models might liberate pupils from having to master the nuances of spelling, syntax, and punctuation so that they can focus on the tasks of constructing a good argument or developing rhythm and variety in their sentences.
The absence of imagination, style, and flair meanwhile makes large language models no threat to fiction writers: “It’s the aesthetic dimension of writing that is really hard for an AI to emulate,” says Rowsell. But that is just what could make them an important new tool.
For example, I suspect it would help students see what makes a story or an essay come alive to get them to improve on ChatGPT’s dull output. Perhaps in much the same way that some musicians are using music-generating AI as a source of copious raw material, large language models might provide kernels of ideas that writers can sift, select, and work on.
Here, as elsewhere, AI holds up a mirror to ourselves, revealing in its shortcomings what cannot be automated and algorithmized: in other words, what constitutes the core of humanity.
Philip Ball is a science writer and author based in London. His most recent book is The Book of Minds.
Lead image: Boboshko-Studio / Shutterstock
One question for Paul Sutter, author of “The Remarkable Emptiness of Existence,” an article in Nautilus this month. Sutter is a theoretical cosmologist at the Institute for Advanced Computational Science at Stony Brook University, where he studies cosmic voids, maps the leftover light from the big bang, and develops new techniques for finding the first stars to appear in the cosmos.
What is our universe expanding into?
That’s a great question. The answer, though, is that it’s not a great question. It’s a little tricky, so let me walk you through it. Yes, our universe is expanding. Our universe has no center and no edge. The Big Bang didn’t happen in one location in space. The Big Bang happened everywhere in the cosmos simultaneously. The Big Bang was not a point in space. It was a point in time. It exists in all of our pasts.
When we say the universe is expanding, we mean that if you map a bunch of galaxies and measure the average distances between those galaxies, you’ll get a number, and then you wait a little bit, a year, a billion years, whatever, and then you go to make the same measurement. Those distances, the average distances, are going to be larger. When we say we live in an expanding universe, we’re saying that the distances between galaxies, on average, grow with time. And that’s it. There’s no center, there’s no edge.
From our perspective here in the Milky Way, it looks like the entire universe is expanding away from us. But if I go to literally any other galaxy in the entire cosmos, I get the same view. It looks like all the galaxies are expanding away from me. The universe doesn’t expand into anything or from anywhere. The universe expands from itself and into itself. And I know that’s very hard to visualize, but thankfully, we have these powerful tools like mathematics that allow us to grapple with concepts that we can’t otherwise imagine.
The Big Bang happened everywhere in the cosmos simultaneously.
There does not have to be a frontier. We can define this very well mathematically, but let me ask you, “What is the center of the Earth?” You’ll say, “It’s the core where all the molten iron stuff is and all the mole people live. It’s in the center of the Earth.” OK. You can point to that. But what if I were to ask you a slightly different question instead: “What is the center of the surface of the Earth?” Just latitude and longitude, give me the center of that. There is no answer.
We have a North Pole. We have a South Pole. But those are arbitrary. You can put those wherever you want. And imagine our Earth was getting bigger and we were to measure the distance between New York and Paris, and then every year that distance is getting bigger and bigger and bigger. There’s no center, no edge. And yet the distance between any two points still grows. This is very easy to visualize in two dimensions. We live in a three-dimensional universe. I can’t imagine it. I can’t think of it. I can think of the analogies in two dimensions, and I can trust the mathematics to take me into three dimensions.
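Sutter’s point—that uniform expansion has no center, and that every galaxy sees all the others receding at a rate proportional to their distance—can be checked directly in a toy model. The sketch below (illustrative coordinates and scale factor of my own choosing, not real data or anything from Sutter’s research) scales a handful of points by a common factor and verifies that every pairwise distance grows, and that the “recession” seen from any chosen home galaxy is proportional to separation, so no point is privileged as a center.

```python
import itertools
import math

# Arbitrary 2-D "galaxy" positions (purely illustrative).
galaxies = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (3.0, 4.0)]
a = 1.5  # scale factor after some elapsed time

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Uniform expansion: every coordinate is multiplied by the same factor.
expanded = [(a * x, a * y) for x, y in galaxies]

# (a) Every pairwise distance grows by exactly the factor a.
for i, j in itertools.combinations(range(len(galaxies)), 2):
    before = dist(galaxies[i], galaxies[j])
    after = dist(expanded[i], expanded[j])
    assert math.isclose(after, a * before)

# (b) From ANY home galaxy, every other galaxy recedes by an amount
# proportional to its distance — the same "Hubble-like" view everywhere,
# so no galaxy can claim to be the center of the expansion.
for home in range(len(galaxies)):
    for other in range(len(galaxies)):
        if other == home:
            continue
        before = dist(galaxies[home], galaxies[other])
        recession = dist(expanded[home], expanded[other]) - before
        assert math.isclose(recession, (a - 1) * before)
```

Both sets of assertions pass: the expansion looks identical from every vantage point, which is the two-dimensional analogy Sutter describes, made concrete.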
Lead image: Designua / Shutterstock