Singularity Hub

28 Jan. 2023

ARTIFICIAL INTELLIGENCE

AI Has Designed Bacteria-Killing Proteins From Scratch—and They Work
Karmela Padavic-Callaghan | New Scientist
“The AI, called ProGen, works in a similar way to AIs that can generate text. ProGen learned how to generate new proteins by learning the grammar of how amino acids combine to form 280 million existing proteins. Instead of the researchers choosing a topic for the AI to write about, they could specify a group of similar proteins for it to focus on. In this case, they chose a group of proteins with antimicrobial activity.”

DIGITAL MEDIA

BuzzFeed to Use ChatGPT Creator OpenAI to Help Create Quizzes and Other Content
Alexandra Bruell | The Wall Street Journal
“BuzzFeed Inc. said it would rely on ChatGPT creator OpenAI to enhance its quizzes and personalize some content for its audiences, becoming the latest digital publisher to embrace artificial intelligence. In a memo to staff sent Thursday morning, which was reviewed by The Wall Street Journal, Chief Executive Jonah Peretti said he intends for AI to play a larger role in the company’s editorial and business operations this year.”

ROBOTICS

Metal Robot Can Melt Its Way Out of Tight Spaces to Escape
Karmela Padavic-Callaghan | New Scientist
“A miniature, shape-shifting robot can liquefy itself and reform, allowing it to complete tasks in hard-to-access places and even escape cages. It could eventually be used as a hands-free soldering machine or a tool for extracting swallowed toxic items.”

FUTURE

Don’t Be Sucked in by AI’s Head-Spinning Hype Cycles
Devin Coldewey | TechCrunch
“[AI] certainly can outplay any human at chess or go, and it can predict the structure of protein chains; it can answer any question confidently (if not correctly) and it can do a remarkably good imitation of any artist, living or dead. But it is difficult to tease out which of these things is important, and to whom, and which will be remembered as briefly diverting parlor tricks in 5 or 10 years, like so many innovations we have been told are going to change the world.”

SPACE

NASA Announces Successful Test of New Propulsion Technology for Treks to Deep Space
Kevin Hurler | Gizmodo
“The rotating detonation rocket engine, or RDRE, generates thrust with detonation, in which a supersonic exothermic front accelerates to produce thrust, much the same way a shockwave travels through the atmosphere after something like TNT explodes. NASA says that this design uses less fuel and provides more thrust than current propulsion systems and that the RDRE could be used to power human landers, as well as crewed missions to the Moon, Mars, and deep space.”

ARTIFICIAL INTELLIGENCE

The Best Use for AI Eye Contact Tech Is Making Movie Stars Look Straight at the Camera
James Vincent | The Verge
“This tech comes with a bunch of interesting questions, of course. Like: is constant unbroken eye contact good or a bit creepy? Are these tools useful for people who don’t naturally like eye contact? …But forget that high-brow trash for now, because here’s the stupidest and best use case of this technology yet: editing movie scenes so actors make eye contact with the camera.”

SCIENCE

Researchers Look a Dinosaur in Its Remarkably Preserved Face
Jeanne Timmons | Ars Technica
“Borealopelta markmitchelli found its way back into the sunlight in 2017, millions of years after it had died. This armored dinosaur is so magnificently preserved that we can see what it looked like in life. Almost the entire animal—the skin, the armor that coats its skin, the spikes along its side, most of its body and feet, even its face—survived fossilization. It is, according to Dr. Donald Henderson, curator of dinosaurs at the Royal Tyrrell Museum, a one-in-a-billion find.”

TECH

Google, Not OpenAI, Has the Most to Gain From Generative AI
Mark Sullivan | Fast Company
“After spending billions on artificial intelligence R&D and acquisitions, Google finds itself ceding the AI limelight to OpenAI, an upstart that has captured the popular imagination with the public beta of its startlingly conversant chatbot, ChatGPT. Now Google reportedly fears the ChatGPT AI could reinvent search, its cornerstone business. But Google, which declared itself an ‘AI-first’ company in 2017, may yet regain its place in the sun. Its AI investments, which date back to the 2000s, may pay off, and could even power the company’s next quarter century of growth (Google turns 25 this year). Here’s why.”

BIOTECH

CRISPR Wants to Feed the World
Jennifer Doudna | Wired
“A great deal of the attention surrounding CRISPR has focused on the medical applications, and for good reason: The results are promising, and the personal stories are uplifting, offering hope to many who have suffered from long-neglected genetic diseases. In 2023, as CRISPR moves into agriculture and climate, we will have the opportunity to radically improve human health in a holistic way that can better safeguard our society and enable millions of people around the world to flourish.”

ETHICS

A Watermark for Chatbots Can Expose Text Written by an AI
Melissa Heikkilä | MIT Technology Review
“Hidden patterns purposely buried in AI-generated texts could help identify them as such, allowing us to tell whether the words we’re reading are written by a human or not. These ‘watermarks’ are invisible to the human eye but let computers detect that the text probably comes from an AI system. If embedded in large language models, they could help prevent some of the problems that these models have already caused.”

SCIENCE

Earth’s Inner Core: A Shifting, Spinning Mystery’s Latest Twist
Dennis Overbye | The New York Times
“Imagine Earth’s inner core—the dense center of our planet—as a heavy, metal ballerina. This iron-rich dancer is capable of pirouetting at ever-changing speeds. That core may be on the cusp of a big shift. Seismologists reported Monday in the journal Nature Geoscience that after brief but peculiar pauses, the inner core changes how it spins—relative to the motion of Earth’s surface—perhaps once every few decades. And, right now, one such reversal may be underway.”

Image Credit: Robert Linder / Unsplash

27 Jan. 2023

Asteroid mining has long caught the imagination of space entrepreneurs, but conventional wisdom has always been that it’s little more than a pipe dream. That may be about to change after a startup announced plans to launch two missions this year designed to validate its space mining technology.

There are estimated to be trillions of dollars worth of precious metals locked up in asteroids strewn throughout the solar system. Given growing concerns about the scarcity of key materials required for batteries and other electronics, there’s been growing interest in attempts to extract these resources.

The enormous cost of space missions and the huge technical challenges involved in mining in space have led many to dismiss the idea as unworkable. The industry has already seen one boom-and-bust cycle, with leading players like Deep Space Industries folding after investors lost their nerve.

But now, California-based startup AstroForge has taken concrete steps toward its goal of becoming the first company to mine an asteroid and bring the materials back to Earth. This year it will launch two missions, one designed to test out its in-space mineral extraction technology and another that will carry out a survey mission of a promising asteroid close to Earth.

“With a finite supply of precious metals on Earth, we have no other choice than to look to deep space to source cost-effective and sustainable materials,” CEO and co-founder Matt Gialich said in a statement.

The company, which raised $13 million in seed funding last April, is planning to target asteroids rich in platinum group metals in deep space. These materials are in major demand in many high-tech industries, but their reserves are limited and geographically concentrated. Extracting them can also be very environmentally damaging.

AstroForge is developing mineral refining technology that it hopes will allow it to extract precious metals from these asteroids and return them to Earth. A prototype will catch a lift into orbit on a spacecraft designed by OrbAstro and launched by a SpaceX Falcon 9 rocket in April. It will be pre-loaded with asteroid-like material, which it will then attempt to vaporize and sort into its different chemical constituents.

Then in October, the company will attempt an even more ambitious mission. A 220-pound spacecraft also designed by OrbAstro, called Brokkr-2, will attempt an 8-month journey to reach an asteroid orbiting the sun about 22 million miles from Earth. It will carry a host of instruments designed to assess the target asteroid in situ.

Both of these missions are precursors designed to test out systems that will be needed for AstroForge’s first proper asteroid mining mission, expected later this decade. The company plans to target asteroids between 66 and 4,920 feet in diameter and break them apart from a distance before collecting the remains.

Even if these missions are a success, there’s still a long road towards making space mining practical. According to research AstroForge recently conducted with the Colorado School of Mines, the bulk of metal-rich asteroids are found in the asteroid belt between Mars and Jupiter, which is currently a 14-year round trip.

Nonetheless, off-world mining does appear to be having somewhat of a renaissance, with dozens of space resources startups springing up in recent years. If AstroForge succeeds in proving out its technology this year, it could give this fledgling industry a major boost.

Image Credit: NASA

26 Jan. 2023

Even if you think you are good at analyzing faces, research shows many people cannot reliably distinguish between photos of real faces and images that have been computer-generated. This is particularly problematic now that computer systems can create realistic-looking photos of people who don’t exist.

A few years ago, for example, a fake LinkedIn profile with a computer-generated profile picture made the news because it successfully connected with US officials and other influential individuals on the networking platform. Counter-intelligence experts even say that spies routinely create phantom profiles with such pictures to home in on foreign targets over social media.

These deepfakes are becoming widespread in everyday culture, which means people should be more aware of how they’re being used in marketing, advertising, and social media. The images are also being used for malicious purposes, such as political propaganda, espionage, and information warfare.

Making them involves something called a deep neural network, a computer system that mimics the way the brain learns. This is “trained” by exposing it to increasingly large data sets of real faces.

In fact, two deep neural networks are set against each other, competing to produce the most realistic images. As a result, the end products are dubbed GAN images, where GAN stands for “generative adversarial networks.” The process generates novel images that are statistically indistinguishable from the training images.
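
For readers who want to see the adversarial idea in code, here is a minimal, illustrative sketch of one GAN training step in PyTorch. The tiny fully connected networks, random stand-in data, and hyperparameters are placeholders of my own choosing, not the convolutional models actually used to generate the faces discussed here.

```python
# Minimal sketch of one GAN training step (illustrative only; real face generators
# use far larger convolutional networks trained on huge photo datasets).
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 64 * 64, 100  # toy flattened "image" size and latent size

generator = nn.Sequential(          # maps random noise -> fake image
    nn.Linear(NOISE_DIM, 256), nn.ReLU(), nn.Linear(256, IMG_DIM), nn.Tanh())
discriminator = nn.Sequential(      # maps image -> probability it is real
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real_images = torch.rand(32, IMG_DIM)   # stand-in for a batch of real photos
noise = torch.randn(32, NOISE_DIM)
fake_images = generator(noise)

# 1) Train the discriminator to tell real from fake.
d_loss = loss_fn(discriminator(real_images), torch.ones(32, 1)) + \
         loss_fn(discriminator(fake_images.detach()), torch.zeros(32, 1))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# 2) Train the generator to fool the discriminator.
g_loss = loss_fn(discriminator(fake_images), torch.ones(32, 1))
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```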

In a study published in iScience, my colleagues and I showed that a failure to distinguish these artificial faces from the real thing has implications for our online behavior. Our research suggests the fake images may erode our trust in others and profoundly change the way we communicate online.

We found that people perceived GAN faces to be even more real-looking than genuine photos of actual people’s faces. While it’s not yet clear why this is, this finding does highlight recent advances in the technology used to generate artificial images.

And we also found an interesting link to attractiveness: faces that were rated as less attractive were also rated as more real. Less attractive faces might be considered more typical, and the typical face may be used as a reference against which all faces are evaluated. Therefore, these GAN faces would look more real because they are more similar to mental templates that people have built from everyday life.

But seeing these artificial faces as authentic may also have consequences for the general levels of trust we extend to a circle of unfamiliar people—a concept known as “social trust.”

We often read too much into the faces we see, and the first impressions we form guide our social interactions. In a second experiment that formed part of our latest study, we saw that people were more likely to trust information conveyed by faces they had previously judged to be real, even if they were artificially generated.

It is not surprising that people put more trust in faces they believe to be real. But we found that trust was eroded once people were informed about the potential presence of artificial faces in online interactions. They then showed lower levels of trust, overall—independently of whether the faces were real or not.

This outcome could be regarded as useful in some ways, because it made people more suspicious in an environment where fake users may operate. From another perspective, however, it may gradually erode the very nature of how we communicate.

In general, we tend to operate on a default assumption that other people are basically truthful and trustworthy. The growth in fake profiles and other artificial online content raises the question of how much their presence and our knowledge about them can alter this “truth default” state, eventually eroding social trust.

Changing Our Defaults

The transition to a world where what’s real is indistinguishable from what’s not could also shift the cultural landscape from being primarily truthful to being primarily artificial and deceptive.

If we are regularly questioning the truthfulness of what we experience online, it might require us to re-deploy our mental effort from the processing of the messages themselves to the processing of the messenger’s identity. In other words, the widespread use of highly realistic, yet artificial, online content could require us to think differently—in ways we hadn’t expected to.

In psychology, we use a term called “reality monitoring” for how we correctly identify whether something is coming from the external world or from within our brains. The advance of technologies that can produce fake, yet highly realistic, faces, images, and video calls means reality monitoring must be based on information other than our own judgments. It also calls for a broader discussion of whether humankind can still afford to default to truth.

It’s crucial for people to be more critical when evaluating digital faces. This can include using reverse image searches to check whether photos are genuine, being wary of social media profiles with little personal information or a large number of followers, and being aware of the potential for deepfake technology to be used for nefarious purposes.

The next frontier for this area should be improved algorithms for detecting fake digital faces. These could then be embedded in social media platforms to help us distinguish the real from the fake when it comes to new connections’ faces.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: The faces in this article’s banner image may look realistic, but they were generated by a computer. NVIDIA via thispersondoesnotexist.com

25 Jan. 2023

Ten years ago, a little-known bacterial defense mechanism skyrocketed to fame as a powerful genome editor. In the decade since, CRISPR-Cas9 has spun off multiple variants, expanding into a comprehensive toolbox that can edit the genetic code of life.

Far from an ivory tower pursuit, its practical uses in research, healthcare, and agriculture came fast and furious.

You’ve seen the headlines. Clinical trials are tackling the underlying genetic mutation behind sickle cell disease. Some researchers edited immune cells to fight untreatable blood cancers in children. Others took pig-to-human organ transplants from dream to reality in an attempt to alleviate the shortage of donor organs. Recent work aims to help millions of people with high cholesterol—and potentially bring CRISPR-based gene therapy to the masses—by lowering their chances of heart disease with a single injection.

But to Dr. Jennifer Doudna, who won the Nobel Prize in 2020 for her role in developing CRISPR, we’re just scratching the surface of its potential. Together with graduate student Joy Wang, Doudna laid out a roadmap for the technology’s next decade in an article in Science.

If the 2010s were focused on establishing the CRISPR toolbox and proving its effectiveness, this decade is when the technology reaches its full potential. From CRISPR-based therapies and large-scale screens for disease diagnostics to engineering high-yield crops and nutritious foods, the technology “and its potential impact are still in their early stages,” the authors wrote.

A Decade of Highlights

We’ve spilt plenty of ink on CRISPR advances, but it pays to revisit the past to predict the future—and potentially scout out problems along the way.

One early highlight was CRISPR’s incredible ability to rapidly engineer animal models of disease. Its original form easily snips away a targeted gene in a very early embryo, which when transplanted into a womb can generate genetically modified mice in just a month, compared to a year using previous methods. Additional CRISPR versions, such as base editing—swapping one genetic letter for another—and prime editing—which snips the DNA without cutting both strands—further boosted the toolkit’s flexibility at engineering genetically altered organoids (think mini-brains) and animals. CRISPR rapidly established dozens of models for some of our most devastating and perplexing diseases, including various cancers, Alzheimer’s, and Duchenne muscular dystrophy—a degenerative disorder in which the muscle slowly wastes away. Dozens of CRISPR-based trials are now in the works.
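
To make the difference between these editing modes concrete, here is a toy sketch that treats DNA as a plain text string. It is purely illustrative (real editing happens on chromatin inside living cells), and the sequences and function names are invented for the example.

```python
# Toy illustration of the edits described above, treating DNA as a plain string.
# This only shows the logical difference between a cut-based knockout and a base edit.

def crispr_knockout(dna: str, target: str) -> str:
    """Classic CRISPR: cut at the target site; repair often deletes or disrupts it."""
    site = dna.find(target)
    if site == -1:
        return dna                                   # no target, no edit
    return dna[:site] + dna[site + len(target):]     # crude model of a deletion

def base_edit(dna: str, position: int, new_letter: str) -> str:
    """Base editing: swap one genetic letter without cutting both strands."""
    return dna[:position] + new_letter + dna[position + 1:]

genome = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGA"   # made-up sequence
print(crispr_knockout(genome, "ATTGTA"))           # target stretch removed
print(base_edit(genome, 5, "T"))                   # C -> T swap at index 5
```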

CRISPR also accelerated genetic screening into the big data age. Rather than targeting one gene at a time, it’s now possible to silence, or activate, thousands of genes in parallel, forming a sort of Rosetta stone for translating genetic perturbations into biological changes. This is especially important for understanding genetic interactions, such as those in cancer or aging that we weren’t previously privy to, and gaining new ammunition for drug development.
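
As a rough illustration of how such pooled screens are typically read out, the sketch below counts how often each guide appears before and after selection and ranks genes by log fold change. The counts and gene names are made up, and real pipelines (tools like MAGeCK) add sequencing-depth normalization and proper statistics.

```python
import math

# Made-up guide RNA read counts before and after selection in a pooled screen.
counts_before = {"GENE_A": 1200, "GENE_B": 980, "GENE_C": 1100, "GENE_D": 1050}
counts_after  = {"GENE_A": 150,  "GENE_B": 990, "GENE_C": 2300, "GENE_D": 60}

def log2_fold_change(before: int, after: int, pseudocount: int = 1) -> float:
    """log2 ratio of raw counts; negative values mean the gene dropped out under selection."""
    return math.log2((after + pseudocount) / (before + pseudocount))

scores = {g: log2_fold_change(counts_before[g], counts_after[g]) for g in counts_before}
for gene, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{gene}: {score:+.2f}")   # most depleted (likely essential) genes print first
```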

But a crowning achievement for CRISPR was multiplexed editing. Like simultaneously tapping on multiple piano keys, this type of genetic engineering targets multiple specific DNA areas, rapidly changing a genome’s genetic makeup in one go.

The technology works in plants and animals. For eons, people have painstakingly bred crops with desirable features—be it color, size, taste, nutrition, or disease resilience. CRISPR can help select for multiple traits or even domesticate new crops in just one generation. CRISPR-generated hornless bulls, nutrient-rich tomatoes, and hyper-muscular farm animals and fish are already a reality. With the world population hitting 8 billion in 2022 and millions suffering from hunger, CRISPRed crops may lend a lifeline—that is, if people are willing to accept the technology.

The Path Forward

Where do we go from here?

To the authors, we need to further boost CRISPR’s effectiveness and build trust. This means going back to the basics to increase the tool’s editing accuracy and precision. Here, platforms to rapidly evolve Cas enzymes, the “scissor” component of the CRISPR machinery, are critical.

There have already been successes: one Cas version, for example, acts as a guardrail for the targeting component—the sgRNA “bloodhound.” In classic CRISPR, the sgRNA works alone, but in this updated version, it struggles to bind without Cas assistance. This trick helps tailor the edit to a specific DNA site and increases accuracy so the cut works as predicted.

Similar strategies can also boost precision with fewer side effects or insert new genes in cells such as neurons and others that no longer divide. While already possible with prime editing, its efficiency can be 30 times lower than classic CRISPR mechanisms.

“A main goal for prime editing in the next decade is improving efficiency without compromising editing product purity—an outcome that has the potential to turn prime editing into one of the most versatile tools for precision editing,” the authors said.

But perhaps more important is delivery, which remains a bottleneck, especially for therapeutics. Currently, CRISPR is generally used on cells outside the body that are infused back—as in the case of CAR-T—or in some cases, tethered to a viral carrier or encapsulated in fatty bubbles and injected into the body. There have been successes: in 2021, the first CRISPR-based injection delivered directly into the body, targeting the genetic disease transthyretin amyloidosis, reported promising early clinical trial results.

Yet both strategies are problematic: not many types of cells can survive the CAR-T treatment—dying when reintroduced into the body—and targeting specific tissues and organs remains mostly out of reach for injectable therapies.

A key advance for the next decade, the authors said, is to shuttle the CRISPR cargo into the targeted tissue without harm and release the gene editor at its intended spot. Each of these steps, though seemingly simple on paper, presents its own set of challenges that will require both bioengineering and innovation to overcome.

Finally, CRISPR can synergize with other technological advances, the authors said. For example, by tapping into cell imaging and machine learning, we could soon engineer even more efficient genome editors. Thanks to faster and cheaper DNA sequencing, we can then easily monitor gene-editing consequences. These data can then provide a kind of feedback mechanism with which to engineer even more powerful genome editors in a virtuous loop.

Real-World Impact

Although further expanding the CRISPR toolbox is on the agenda, the technology is sufficiently mature to impact the real world in its second decade, the authors said.

In the near future, we should see “an increased number of CRISPR-based treatments moving to later stages of clinical trials.” Looking further ahead, the technology, or its variants, could make pig-to-human organ xenotransplants routine, rather than experimental. Large-scale screens for genes that lead to aging or degenerative brain or heart diseases—our top killers today—could yield prophylactic CRISPR-based treatments. It’s no easy task: we need both knowledge of the genetics underlying multifaceted genetic diseases—that is, when multiple genes come into play—and a way to deliver the editing tools to their target. “But the potential benefits may drive innovation in these areas well beyond what is possible today,” the authors said.

Yet with greater power comes greater responsibility. CRISPR has advanced at breakneck speed, and regulatory agencies and the public are still struggling to catch up. Perhaps the most notorious example was that of the CRISPR babies, where experiments carried out against global ethical guidelines prompted an international consortium to lay down a red line for human germline editing.

Similarly, genetically modified organisms (GMOs) remain a controversial topic. Although CRISPR is far more precise than previous genetic tools, it’ll be up to consumers to decide whether to welcome a new generation of human-evolved foods—both plant and animal.

These are important conversations that need global discourse as CRISPR enters its second decade. But to the authors, the future looks bright.

“Just as during the advent of CRISPR genome editing, a combination of scientific curiosity and the desire to benefit society will drive the next decade of innovation in CRISPR technology,” they said. “By continuing to explore the natural world, we will discover what cannot be imagined and put it to real-world use for the benefit of the planet.”

Image Credit: NIH

23 Jan. 2023

Boosting the role of renewables in our electricity supply will require a massive increase in grid-scale energy storage. But new research suggests that electric vehicle batteries could meet short-term storage demands by as soon as 2030.

While solar and wind are rapidly becoming the cheapest source of electricity in many parts of the world, their intermittency is a significant problem. One potential solution is to use batteries to store energy for times when the sun doesn’t shine and the wind doesn’t blow, but building enough capacity to serve entire power grids would be enormously costly.

That’s why people have suggested making use of the huge number of batteries being installed in the ever-growing global fleet of electric vehicles. The idea is that when they’re not on the road, utilities could use these batteries to store excess energy and draw from it when demand spikes.

While there have been some early pilots, so far it has been unclear whether the idea really has legs. Now, a new economic analysis led by researchers at Leiden University in the Netherlands suggests that electric vehicle batteries could play a major role in grid-scale storage in the relatively near future.

There are two main ways that these batteries could aid the renewables transition, according to the team’s study published in Nature Communications. Firstly, so-called vehicle-to-grid technology could make it possible to do smart vehicle charging, only charging cars when power demand is low. It could also make it possible for vehicle owners to temporarily store electricity for utilities for a price.
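
A loose sketch of the smart charging idea: a plugged-in car follows a simple threshold rule, charging when system demand is low and selling a little back when demand spikes. The thresholds and the decision rule here are invented for illustration, not any utility’s actual scheme.

```python
# Toy vehicle-to-grid dispatch rule (invented thresholds, not a real utility scheme).
def dispatch(grid_demand_mw: float, battery_soc: float) -> str:
    """Decide what a plugged-in EV battery does, given demand and state of charge (0-1)."""
    LOW_DEMAND, HIGH_DEMAND = 20_000, 35_000   # hypothetical system-wide MW thresholds
    if grid_demand_mw < LOW_DEMAND and battery_soc < 0.9:
        return "charge"                         # soak up cheap or excess power
    if grid_demand_mw > HIGH_DEMAND and battery_soc > 0.5:
        return "discharge_to_grid"              # sell stored energy back during the peak
    return "idle"

for demand, soc in [(18_000, 0.4), (37_000, 0.8), (28_000, 0.6)]:
    print(demand, soc, "->", dispatch(demand, soc))
```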

But old car batteries could also make a significant contribution. Their capacity declines over repeated charge and discharge cycles, and batteries typically become unsuitable for use in electric vehicles by the time they drop to 70 to 80 percent of their original capacity. That’s because they can no longer hold enough power to make up for their added weight. Weight isn’t a problem for grid-scale storage though, so these car batteries can be repurposed.

The researchers note that the lithium-ion batteries used in cars are probably only suitable for short-term storage of under four hours, but this accounts for most of the projected demand. So far though, there hasn’t been a comprehensive study of how large a contribution both current and retired electric vehicle batteries could play in the future of the grid.

To try and fill that gap, the researchers combined data on how many batteries are estimated to be produced over the coming years, how quickly batteries will degrade based on local conditions, and how electric vehicles are likely to be used in different countries—for instance, how many miles people drive in a day and how often they charge.

They found that the total available storage capacity from these two sources by 2050 was likely to be between 32 and 62 terawatt-hours. The authors note that this is significantly higher than the 3.4 to 19.2 terawatt-hours the world is predicted to need by 2050, according to the International Renewable Energy Agency and research group Storage Lab.

However, not every electric vehicle owner is likely to participate in vehicle-to-grid schemes and not all batteries will get repurposed at the end of their lives. So the researchers investigated how different participation rates would impact the ability of electric vehicle batteries to contribute to grid storage.

They found that to meet global demand by 2050, only between 12 and 43 percent of vehicle owners would need to take part in vehicle-to-grid schemes. If only half of secondhand batteries are used for grid storage, the required participation rates would drop to just 10 percent. In the most optimistic scenarios, electric vehicle batteries could meet demand by 2030.
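
For a back-of-envelope feel for that arithmetic, dividing projected demand by available capacity gives a required participation fraction. The sketch below uses the study’s headline numbers but a deliberately naive model, so it brackets rather than reproduces the paper’s 12 to 43 percent range.

```python
# Crude illustration of the participation arithmetic (simplified; the study's own
# 12-43 percent range comes from a much more detailed regional model).
capacity_available_twh = (32.0, 62.0)   # total EV + second-life capacity by 2050 (study figure)
demand_twh = (3.4, 19.2)                # projected short-term storage need by 2050 (study figure)

def required_participation(demand: float, capacity: float) -> float:
    """Fraction of the available capacity that must actually be offered to the grid."""
    return demand / capacity

best_case = required_participation(demand_twh[0], capacity_available_twh[1])
worst_case = required_participation(demand_twh[1], capacity_available_twh[0])
print(f"required participation: {best_case:.0%} to {worst_case:.0%}")
# -> roughly 5% to 60% under this naive split; the paper's finer-grained model lands at 12-43%
```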

Lots of factors will impact whether or not this could ever be achieved, including things like how quickly vehicle-to-grid infrastructure can be rolled out, how easy it is to convince vehicle owners to take part, and the economics of recycling car batteries at the end of their lives. The authors note that governments can and should play a role in incentivizing participation and mandating the reuse of old batteries.

But either way, the results suggest there may be a promising alternative to a costly and time-consuming rollout of dedicated grid storage. Electric vehicle owners may soon be doing their part for the environment twice over.

Image Credit: Shutterstock.com/Roman Zaiets

22 Jan. 2023

Google is one of the biggest companies on Earth. Google’s search engine is the front door to the internet. And according to recent reports, Google is scrambling.

Late last year, OpenAI, an artificial intelligence company at the forefront of the field, released ChatGPT. Alongside Elon Musk’s Twitter acquisition and fallout from FTX’s crypto implosion, breathless chatter about ChatGPT and generative AI has been ubiquitous.

The chatbot, which was born from an upgrade to OpenAI’s GPT-3 algorithm, is like a futuristic Q&A machine. Ask any question, and it responds in plain language. Sometimes it gets the facts straight. Sometimes not so much. Still, ChatGPT took the world by storm thanks to the fluidity of its prose, its simple interface, and a mainstream launch.

When a new technology hits public consciousness, people try to sort out its impact. Between debates about how bots like ChatGPT will impact everything from academics to journalism, not a few folks have suggested ChatGPT may end Google’s reign in search. Who wants to hunt down information fragmented across a list of web pages when you could get a coherent, seemingly authoritative, answer in an instant?

In December, The New York Times reported Google was taking the prospect seriously, with management declaring a “code red” internally. This week, as Google announced layoffs, CEO Sundar Pichai told employees the company will sharpen its focus on AI. The NYT also reported Google founders, Larry Page and Sergey Brin, are now involved in efforts to streamline development of AI products. The worry is that they’ve lost a step to the competition.

If true, it isn’t due to a lack of ability or vision. Google’s no slouch at AI.

The technology here—a flavor of deep learning model called a transformer—was developed at Google in 2017. The company already has its own versions of all the flashy generative AI models, from images (Imagen) to text (LaMDA). Indeed, in 2021, Google researchers published a paper pondering how large language models (like ChatGPT) might radically upend search in the future.

“What if we got rid of the notion of the index altogether and replaced it with a pre-trained model that efficiently and effectively encodes all of the information contained in the corpus?” Donald Metzler, a Google researcher, and coauthors wrote at the time. “What if the distinction between retrieval and ranking went away and instead there was a single response generation phase?” This should sound familiar.
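
A toy way to picture that shift: classic search looks a query up in an inverted index and returns a ranked list of documents, while the proposed approach hands the query to a single generative model that writes the response itself. The sketch below is conceptual only; the model call is a hypothetical stand-in, not Google’s or OpenAI’s actual API.

```python
# Toy contrast between index-based search and a single generative "response" step.
from collections import defaultdict

documents = {
    1: "rotating detonation rocket engines generate thrust with supersonic detonation",
    2: "crispr base editing swaps one genetic letter for another",
}

# Classic search: build an inverted index, retrieve matching docs, rank by term overlap.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.split():
        index[word].add(doc_id)

def retrieve_and_rank(query: str) -> list[int]:
    hits = defaultdict(int)
    for word in query.lower().split():
        for doc_id in index.get(word, set()):
            hits[doc_id] += 1
    return sorted(hits, key=hits.get, reverse=True)   # a list of documents, not an answer

# Generative search: a single model call that returns prose directly.
def generate_answer(query: str) -> str:
    # Hypothetical stand-in for a large language model; no real API is called here.
    return f"[model-written answer to: {query!r}]"

print(retrieve_and_rank("how do detonation engines generate thrust"))
print(generate_answer("how do detonation engines generate thrust"))
```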

Whereas smaller organizations opened access to their algorithms more aggressively, however, Google largely kept its work under wraps. Offering only small, tightly controlled demos to limited groups of people, it deemed the tech too risky and error-prone for wider release just yet. Damage to its brand and reputation was a chief concern.

Now, sweating it out under the bright lights of ChatGPT, the company is planning to release some 20 AI-powered products later this year, according to the NYT. These will encompass all the top generative AI applications, like image, text, and code generation—and they’ll test a ChatGPT-like bot in search.

But is the technology ready to go from splashy demo tested by millions to a crucial tool trusted by billions? In their 2021 paper, the Google researchers suggested an ideal chatbot search assistant would be authoritative, transparent, unbiased, accessible, and contain diverse perspectives. Acing each of those categories is still a stretch for even the most advanced large language models.

Trust matters with search in particular. When it serves up a list of web pages today, Google can blame content creators for poor quality and vow to serve better results in the future. With an AI chatbot, Google itself is the content creator.

As Fast Company’s Harry McCracken pointed out not long ago, if ChatGPT can’t get its facts straight, nothing else matters. “Whenever I chat with ChatGPT about any subject I know much about, such as the history of animation, I’m most struck by how deeply untrustworthy it is,” McCracken wrote. “If a rogue software engineer set out to poison our shared corpus of knowledge by generating convincing-sounding misinformation in bulk, the end result might look something like this.”

Google is clearly aware of the risk. And whatever implementation in search it unveils this year, it still aims to prioritize “getting the facts right, ensuring safety, and getting rid of misinformation.” How it will accomplish these goals is an open question. Just in terms of “ensuring safety,” for example, Google’s algorithms underperform OpenAI’s on metrics of toxicity, according to the NYT. But a Time investigation this week reported that OpenAI had to turn, at least in part, to human workers in Kenya, paid a pittance, to flag and scrub the most toxic data from ChatGPT.

Other questions, including about the copyright of works used to train generative algorithms, remain similarly unresolved. Two copyright lawsuits, one by Getty Images and one by a group of artists, were filed earlier this week.

Still, the competitive landscape, it seems, is compelling Google, Microsoft—which has invested big in OpenAI and is already incorporating its algorithms into products—and others to go full steam ahead in an effort to minimize the risk of being left behind. We’ll have to wait and see what an implementation in search looks like. Maybe it’ll be in beta with a disclaimer for a while, or maybe, as the year progresses, the tech will again surprise us with breakthroughs.

In either case, while generative AI will play a role in search, how much of a role and how soon is less settled. As to whether Google loses its perch? OpenAI’s CEO, Sam Altman, pushed back against the hype this week.

“I think whenever someone talks about a technology being the end of some other giant company, it’s usually wrong,” Altman said in response to a question about the likelihood ChatGPT dethrones Google. “I think people forget they get to make a countermove here, and they’re like pretty smart, pretty competent. I do think there’s a change for search that will probably come at some point—but not as dramatically as people think in the short term.”

Image Credit: D21_Gallery / Unsplash

21 Jan. 2023

ARTIFICIAL INTELLIGENCE

What Happens When AI Has Read Everything?
Ross Andersen | The Atlantic
“Artificial intelligence has in recent years proved itself to be a quick study, although it is being educated in a manner that would shame the most brutal headmaster. Locked into airtight Borgesian libraries for months with no bathroom breaks or sleep, AIs are told not to emerge until they’ve finished a self-paced speed course in human culture. On the syllabus: a decent fraction of all the surviving text that we have ever produced.”

GENE EDITING

Next Up for CRISPR: Gene Editing for the Masses?
Jessica Hamzelou | MIT Technology Review
“We know the basics of healthy living by now. A balanced diet, regular exercise, and stress reduction can help us avoid heart disease—the world’s biggest killer. But what if you could take a vaccine, too? And not a typical vaccine—one shot that would alter your DNA to provide lifelong protection? That vision is not far off, researchers say. Advances in gene editing, and CRISPR technology in particular, may soon make it possible.”

ETHICS

OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic
Billy Perrigo | Time
“ChatGPT’s creator, OpenAI, is now reportedly in talks with investors to raise funds at a $29 billion valuation, including a potential $10 billion investment by Microsoft. That would make OpenAI, which was founded in San Francisco in 2015 with the aim of building superintelligent machines, one of the world’s most valuable AI companies. But the success story is not one of Silicon Valley genius alone. In its quest to make ChatGPT less toxic, OpenAI used outsourced Kenyan laborers earning less than $2 per hour, a TIME investigation has found.”

ROBOTICS

Boston Dynamics’ Atlas Robot Grows a Set of Hands, Attempts Construction Work
Ron Amadeo | Ars Technica
“Atlas isn’t just clumsily picking things up and carrying them, though. It’s running, jumping, and spinning while carrying heavy objects. At one point it jumps and throws the heavy toolbox up to its construction partner, all without losing balance. It’s doing all this on rickety scaffolding and improvised plank walkways, too, so the ground is constantly moving under Atlas’ feet with every step. Picking up stuff is the start of teaching the robot to do actual work, and it looks right at home on a rough-and-tumble construction site.”

BIOTECH

These Scientists Used CRISPR to Put an Alligator Gene Into Catfish
Jessica Hamzelou | MIT Technology Review
“Millions of fish are farmed in the US every year, but many of them die from infections. In theory, genetically engineering fish with genes that protect them from disease could reduce waste and help limit the environmental impact of fish farming. A team of scientists have attempted to do just that—by inserting an alligator gene into the genomes of catfish.”

3D PRINTING

Can 3D Printing Help Solve the Housing Crisis?
Rachel Monroe | The New Yorker
“Until last year, Icon, one of the biggest and best-funded companies in the field, had printed fewer than two dozen houses, most of them essentially test cases. But, when I met Ballard, the company had recently announced a partnership with Lennar, the second-largest home-builder in the United States, to print a hundred houses in a development outside Austin. A lot was riding on the project, which would be a test of whether the technology was ready for the mainstream.”

FUTURE

1923 Cartoon Eerily Predicted 2023’s AI Art Generators
Benj Edwards | Ars Technica
“[The vintage cartoon] depicts a cartoonist standing by his drawing table and making plans for social events while an ‘idea dynamo’ generates ideas and a ‘cartoon dynamo’ renders the artwork. Interestingly, this separation of labor feels similar to our neural networks of today. In the actual 2023, the ‘idea dynamo’ would likely be a large language model like GPT-3 (albeit imperfectly), and the ‘cartoon dynamo’ is most similar to an image-synthesis model like Stable Diffusion.”

TECH

OpenAI CEO Sam Altman on GPT-4: ‘People Are Begging to Be Disappointed and They Will Be’
James Vincent | The Verge
“GPT-3 came out in 2020, and an improved version, GPT 3.5, was used to create ChatGPT. The launch of GPT-4 is much anticipated, with more excitable members of the AI community and Silicon Valley world already declaring it to be a huge leap forward. …’The GPT-4 rumor mill is a ridiculous thing. I don’t know where it all comes from,’ said the OpenAI CEO. ‘People are begging to be disappointed and they will be. The hype is just like… We don’t have an actual AGI and that’s sort of what’s expected of us.’”

COMPUTING

Are We Living in a Computer Simulation, and Can We Hack It?
Dennis Overbye | The New York Times
“If you could change the laws of nature, what would you change? Maybe it’s that pesky speed-of-light limit on cosmic travel—not to mention war, pestilence and the eventual asteroid that has Earth’s name on it. Maybe you would like the ability to go back in time— to tell your teenage self how to deal with your parents, or to buy Google stock. Couldn’t the universe use a few improvements?”

Image Credit: Victor Crespo / Unsplash

20 Jan. 2023

In 2020, California-based Good Meat became the first company in the world to start selling lab-grown meat. Its cultured chicken has been on the market in Singapore since then, and though it’s still awaiting FDA approval to sell its products in the US, this week the company reached another milestone when it received approval to sell serum-free meat in Singapore.

The approval was granted by the Singapore Food Agency, and means Good Meat is allowed to use synthetic processes to create its products.

Cultured meat is grown from animal cells and is biologically the same as meat that comes from an animal. The process starts with harvesting muscle cells from an animal, then feeding those cells a mixture of nutrients and naturally-occurring growth factors (or, as Good Meat’s process specifies, amino acids, fats, and vitamins) so that they multiply, differentiate, then grow to form muscle tissue, in much the same way muscle grows inside animals’ bodies.

Usually, getting animal cells to duplicate requires serum. One of the most commonly used is fetal bovine serum, which is made from the blood of fetuses extracted from cows during slaughter. It sounds a bit brutal even for the non-squeamish carnivore. Figuring out how to replicate the serum’s effects with synthetic ingredients has been one of the biggest hurdles to making cultured meat viable.

“Our research and development team worked diligently to replace serum with other nutrients that provide the same functionality, and their hard work over several years paid off,” said Andrew Noyes, head of communications at Good Meat’s parent company, Eat Just. The approval should allow for greater scalability, lower manufacturing costs, and a more sustainable product.

The company is in the process of building a demonstration plant in Singapore that will house a 6,000-liter bioreactor, which it says will be the largest in the industry to date and will have the capacity to make tens of thousands of pounds of meat per year.

The serum-free approval “complements the company’s work in Singapore to build and operate its bioreactor facility, where over 50 research scientists and engineers will develop innovative capabilities in the cultivated meat space such as media optimization, process development, and texturization of cultivated meat products,” said Damian Chan, executive vice president of the Singapore Economic Development Board.

It won’t be the only plant of its type. Israeli company Believer Meats opened a facility to produce lab-grown meat at scale in Israel in 2021, and last month started construction of a 200,000-square-foot factory in Wilson, North Carolina.

This past November a third player in the industry, Upside Foods, became the first company to receive a No Questions Letter from the FDA, essentially an approval saying its lab-grown chicken is safe for consumers to eat (though two additional approvals are still needed before the company can actually start selling the product).

The timing of the cultured meat industry’s advancement is convenient, though not coincidental; more consumers are becoming conscious of factory farming’s negative environmental impact, and they’re looking for eco-friendly alternatives. Cultured meat will allow them to eat real meat (as opposed to plant-based “meat”) with a far smaller environmental impact and no animals harmed to boot.

It remains to be seen whether scaling production will go as smoothly as Good Meat and its competitors are hoping, as well as how long it will take for the products to reach price parity with regular meat. But if the industry’s recent streak of clearing hurdles continues, lab-grown meat may soon be found in restaurants and on grocery shelves.

Image Credit: Good Meat

19 Jan. 2023

Two major astronomy research programs, called EMU and PEGASUS, have joined forces to resolve one of the mysteries of our Milky Way: where are all the supernova remnants?

A supernova remnant is an expanding cloud of gas and dust marking the last phase in the life of a star, after it has exploded as a supernova. But the number of supernova remnants we have detected so far with radio telescopes is too low. Models predict five times as many, so where are the missing ones?

We have combined observations from two of Australia’s world-leading radio telescopes, the ASKAP radio telescope and the Parkes radio telescope, Murriyang, to answer this question.

The Gas Between the Stars

The new image reveals thin tendrils and clumpy clouds associated with hydrogen gas filling the space between the stars. We can see sites where new stars are forming, as well as supernova remnants.

In just this small patch, only about 1 percent of the whole Milky Way, we have discovered more than 20 new possible supernova remnants where only 7 were previously known.

These discoveries were led by PhD student Brianna Ball from Canada’s University of Alberta, working with her supervisor, Roland Kothes of the National Research Council of Canada, who prepared the image. These new discoveries suggest we are close to accounting for the missing remnants.

So why can we see them now when we couldn’t before?

The Power of Joining Forces

I lead the Evolutionary Map of the Universe or EMU program, an ambitious project with ASKAP to make the best radio atlas of the southern hemisphere.

EMU will measure about 40 million new distant galaxies and supermassive black holes to help us understand how galaxies have changed over the history of the universe.

Early EMU data have already led to the discovery of odd radio circles (or “ORCs”), and revealed rare oddities like the “Dancing Ghosts.”

For any telescope, the resolution of its images depends on the size of its aperture. Interferometers like ASKAP simulate the aperture of a much larger telescope. With 36 relatively small dishes (each 12 meters in diameter) spread across 6 kilometers, ASKAP mimics a single telescope with a 6-kilometer-wide dish.
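
The resolution claim follows from the standard diffraction relation, where angular resolution is roughly the observing wavelength divided by the dish diameter or baseline. The quick check below assumes a wavelength of about 21 centimeters, a typical value for these radio surveys and an assumption on my part rather than a figure from the article.

```python
import math

# Rough angular resolution from the diffraction limit, theta ~ wavelength / D (radians).
# The 21 cm wavelength is an assumption; the article does not state the observing frequency.
wavelength_m = 0.21

def resolution_arcsec(dish_or_baseline_m: float) -> float:
    theta_rad = wavelength_m / dish_or_baseline_m
    return math.degrees(theta_rad) * 3600   # convert radians -> arcseconds

print(f"single 12 m ASKAP dish: {resolution_arcsec(12):,.0f} arcsec")    # ~3,600 arcsec (~1 degree)
print(f"64 m Parkes dish      : {resolution_arcsec(64):,.0f} arcsec")    # ~680 arcsec (~11 arcmin)
print(f"6 km ASKAP baseline   : {resolution_arcsec(6_000):,.1f} arcsec") # ~7 arcsec
```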

That gives ASKAP good resolution, but it comes at the expense of missing radio emission on the largest scales. On its own, the ASKAP image appears too skeletal.

To recover that missing information, we turned to a companion project called PEGASUS, led by Ettore Carretti of Italy’s National Institute of Astrophysics.

PEGASUS uses the 64-meter-diameter Parkes/Murriyang telescope (one of the largest single-dish radio telescopes in the world) to map the sky.

Even with such a large dish, Parkes has rather limited resolution. By combining the information from both Parkes and ASKAP, each fills in the gaps of the other to give us the best fidelity image of this region of our Milky Way galaxy. This combination reveals the radio emission on all scales to help uncover the missing supernova remnants.
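
In practice that combination is often done by blending the two maps in the spatial-frequency domain, a step radio astronomers call feathering: large scales come from the single dish, fine detail from the interferometer. The numpy sketch below uses synthetic arrays and a made-up crossover scale, and skips the calibration, beam, and unit handling a real pipeline needs.

```python
import numpy as np

# Heavily simplified "feathering": take large spatial scales from the single-dish
# map and fine detail from the interferometer map, then combine in Fourier space.
# Synthetic data only; not the actual EMU/PEGASUS pipeline.
rng = np.random.default_rng(0)
single_dish = rng.normal(size=(256, 256))      # stand-in for the Parkes map
interferometer = rng.normal(size=(256, 256))   # stand-in for the ASKAP map

fy = np.fft.fftfreq(256)[:, None]
fx = np.fft.fftfreq(256)[None, :]
radius = np.hypot(fx, fy)                      # spatial frequency of each Fourier pixel
low_pass = np.exp(-(radius / 0.05) ** 2)       # weight ~1 at large scales, ~0 at fine scales

combined_ft = (np.fft.fft2(single_dish) * low_pass +
               np.fft.fft2(interferometer) * (1 - low_pass))
combined = np.real(np.fft.ifft2(combined_ft))
print(combined.shape)                          # a single map containing both scale regimes
```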

Linking the datasets from EMU and PEGASUS will allow us to reveal more hidden gems. In the next few years we will have an unprecedented view of almost the entire Milky Way, about a hundred times larger than this initial image, but with the same level of detail and sensitivity.

We estimate there may be up to 1,500 or more new supernova remnants yet to discover. Solving the puzzle of these missing remnants will open new windows into the history of our Milky Way.


ASKAP and Parkes are owned and operated by CSIRO, Australia’s national science agency, as part of the Australia Telescope National Facility. CSIRO acknowledges the Wajarri Yamaji people as the Traditional Owners and native title holders of Inyarrimanha Ilgari Bundara, the CSIRO Murchison Radio-astronomy Observatory, where ASKAP is located, and the Wiradjuri people as the traditional owners of the Parkes Observatory.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: R. Kothes (NRC) and the PEGASUS team

18 Jan. 2023

Proteins are often called the building blocks of life.

While true, the analogy evokes images of Lego-like pieces snapping together to form intricate but rigid blocks that combine into muscles and other tissues. In reality, proteins are more like flexible tumbleweeds—highly sophisticated structures with “spikes” and branches protruding from a central frame—that morph and change with their environment.

This shapeshifting controls the biological processes of living things—for example, opening the protein tunnels dotted along neurons or driving cancerous growth. But it also makes understanding protein behavior and developing drugs that interact with proteins a challenge.

While recent AI breakthroughs in the prediction (and even generation) of protein structures are a huge advance 50 years in the making, they still only offer snapshots of proteins. To capture whole biological processes—and identify which lead to diseases—we need predictions of protein structures in multiple “poses” and, more importantly, how each of these poses changes a cell’s inner functions. And if we’re to rely on AI to solve the challenge, we need more data.

Thanks to a new protein atlas published this month in Nature, we now have a great start.

A collaboration between MIT, Harvard Medical School, Yale School of Medicine, and Weill Cornell Medical College, the study focused on a specific chemical change in proteins—called phosphorylation—that’s known to act as a protein on-off switch, and in many cases, lead to or inhibit cancer.

The atlas will help scientists dig into how signaling goes awry in tumors. But to Sean Humphrey and Elise Needham, doctors at the Royal Children’s Hospital and the University of Cambridge, respectively, who were not involved in the work, the atlas may also begin to help turn static AI predictions of protein shapes into more fluid predictions of how proteins behave in the body.

Let’s Talk About PTMs (Huh?)

After they’re manufactured, the surfaces of proteins are “dotted” with small chemical groups—like adding toppings to an ice cream cone. These toppings either enhance or turn off the protein’s activity. In other cases, parts of the protein get chopped off to activate it. Protein tags in neurons drive brain development; other tags plant red flags on proteins ready for disposal.

All these tweaks are called post-translational modifications (PTMs).

PTMs essentially transform proteins into biological microprocessors. They’re an efficient way for the cell to regulate its inner workings without needing to alter its DNA or epigenetic makeup. PTMs often dramatically change the structure and function of proteins, and in some cases, they could contribute to Alzheimer’s, cancer, stroke, and diabetes.

For Elisa Fadda at Maynooth University in Ireland and Jon Agirre at the University of York, it’s high time we incorporated PTMs into AI protein predictors like AlphaFold. While AlphaFold is changing the way we do structural biology, they said, “the algorithm does not account for essential modifications that affect protein structure and function, which gives us only part of the picture.”

The King PTM

So, what kinds of PTMs should we first incorporate into an AI?

Let me introduce you to phosphorylation. This PTM adds a chemical group, phosphate, to specific locations on proteins. It’s a “regulatory mechanism that is fundamental to life,” said Humphrey and Needham.

The protein hotspots for phosphorylation are well-known: two amino acids, serine and threonine. Roughly 99 percent of all phosphorylation sites involve the duo, and previous studies have identified roughly 100,000 potential spots. The problem is identifying which proteins—enzymes dubbed kinases, of which there are hundreds—add the chemical groups to which hotspots.

In the new study, the team first screened over 300 kinases against more than 100 targets. Each target is a short string of amino acids containing a central serine or threonine, the “bulls-eye” for phosphorylation, surrounded by different amino acids. The goal was to see how effective each kinase is at each target—almost like a kinase matchmaking game.

This allowed the team to find the most preferred motif—sequence of amino acids—for each kinase. Surprisingly, “almost two-thirds of phosphorylation sites could be assigned to one of a small handful of kinases,” said Humphrey and Needham.
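
Conceptually, the matchmaking amounts to scoring each serine/threonine-centered peptide against every kinase’s position-specific preferences and asking which kinase scores highest. The sketch below uses two hypothetical kinases and invented preference weights, not the paper’s data or scoring scheme.

```python
# Toy position-specific scoring of a phosphorylation-site peptide against two
# hypothetical kinases. Positions are relative to the central serine/threonine (0);
# all preference values are invented for illustration.
kinase_preferences = {
    "KINASE_A": {(-3, "R"): 2.0, (-2, "R"): 1.5, (+1, "L"): 0.5},   # prefers basic residues upstream
    "KINASE_B": {(+1, "P"): 2.5, (-2, "S"): 0.5},                   # proline-directed
}

def score_peptide(peptide: str, prefs: dict) -> float:
    """Sum preference weights for flanking residues around the central S/T."""
    center = len(peptide) // 2
    assert peptide[center] in "ST", "central residue should be the phospho-acceptor"
    return sum(weight for (offset, residue), weight in prefs.items()
               if peptide[center + offset] == residue)

peptide = "RRASLPE"   # a 7-residue window with serine in the middle
for kinase, prefs in kinase_preferences.items():
    print(kinase, score_peptide(peptide, prefs))   # the higher-scoring kinase is the better "match"
```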

A Rosetta Stone

Based on their findings, the team grouped the kinases into 38 different motif-based classes, each with an appetite for a particular protein target. In theory, the kinases can catalyze over 90,000 known phosphorylation sites in proteins.

“This atlas of kinase motifs now lets us decode signaling networks,” said study senior author Michael Yaffe of MIT.

In a proof-of-concept test, the team used the atlas to hunt down cellular signals that differ between healthy cells and those exposed to radiation. The test found 37 potential phosphorylation targets of a single kinase, most of which were previously unknown.

Ok, so what?

The study’s method can be used to track down other PTMs to begin building a comprehensive atlas of the cellular signals and networks that drive our basic biological functions.

The dataset, when fed into AlphaFold, RoseTTAFold, their variants, or other emerging protein structure prediction algorithms, could help them better predict how proteins dynamically change shape and interact in cells. This would be far more useful for drug discovery than today’s static protein snapshots. Scientists may also be able to use such tools to tackle the kinase “dark universe”: a subset of more than 100 kinases with no discernible protein targets. In other words, we have no idea how these powerful proteins work inside the body.

“This possibility should motivate researchers to venture ‘into the dark’, to better characterize these elusive proteins,” said Humphrey and Needham.

The team acknowledges there’s a long road ahead, but they hope their atlas and methodology can influence others to build new databases. In the end, “we hope our comprehensive motif-based approach will be uniquely equipped to unravel the complex signaling that underlies human disease progressions, mechanisms of cancer drug resistance, dietary interventions and other important physiological processes,” they said.

Image Credit: DeepMind
