At the end of last year, Israeli cultured meat company Believer Meats broke ground on a 200,000-square-foot factory outside Raleigh, North Carolina. The facility will be the biggest cultured meat factory in the world (well, unless a bigger one goes up before it’s done, which is unlikely).
However, the sale of cultured meat isn’t fully legal in the US yet (in fact, the only countries where the meat can be sold right now are Singapore and Israel), so regulations are going to need to keep pace with production capacity to make such facilities worth building. Last week California-based Good Meat took a step in this direction, receiving a crucial FDA approval for sale of its cultured chicken in the US.
Cultured meat is made by taking muscle cells from a live animal (without harming it) and feeding those cells a mixture of nutrients and growth factors to make them multiply, differentiate, and grow to form muscle tissue. The harvested tissue then needs to be refined and shaped into a final product, which can involve extrusion cooking, molding, or 3D printing.
Good Meat was the first company in the world to start selling cultured meat, with its chicken hitting the Singaporean market in 2020. This past January the company hit another milestone when the Singapore Food Agency granted it approval to sell serum-free meat in Singapore (“serum-free” means the company can use synthetic ingredients in its production process, eliminating fetal bovine serum, the additive that spurs animal cells to multiply).
Now Good Meat has made headway in what it hopes will be its biggest market, the US. It received an FDA approval called a No Questions letter, which states that after conducting a thorough evaluation of the company’s meat, the agency concluded it’s safe for consumers to eat. Besides meeting microbiological and purity standards (the press release notes that its cultured chicken’s microbiological levels are “significantly cleaner” than conventional chicken), the evaluation found that Good Meat’s chicken contains “high protein content, a well-balanced amino acid profile, and is a rich source of minerals.”
Good Meat isn’t the first company to receive this approval in the US. Its competitor Upside Foods got a No Questions letter for its cultured chicken last November. Upside’s 53,000-square-foot production center in the Bay Area will eventually be able to produce more than 400,000 pounds of meat, poultry, and seafood per year. Before becoming available in grocery stores, Upside’s chicken will be introduced to consumers in restaurants, starting with an upscale San Francisco restaurant run by a Michelin-starred chef.
Similarly, Good Meat plans to launch its cultured chicken at a Washington DC restaurant owned by celebrity chef José Andrés. Before that can happen, though, the company has to work with the US Department of Agriculture to receive additional approvals for its production facilities and its product.
The company is building a demonstration plant in Singapore, and announced plans last year to build a large-scale facility in the US with an annual production capacity of 30 million pounds of meat (which means it will be bigger than the Believer Meats plant in North Carolina).
Good Meat will have its work cut out for it, as there are more than 80 other companies vying for a slice of the lab-grown meat market, which is projected to reach a value of $12.7 billion by 2030. Given that all of its competitors will have to go through the FDA and USDA approvals process, though, Good Meat has a leg up.
Image Credit: Good Meat
If computer chips make the modern world go around, then Nvidia and TSMC are flywheels keeping it spinning. It’s worth paying attention when the former says it’s made a chipmaking breakthrough and the latter confirms it’s about to put that breakthrough into practice.
At Nvidia’s GTC developer conference this week, CEO Jensen Huang said Nvidia has developed software to make a chipmaking step, called inverse lithography, over 40 times faster. A process that usually takes weeks can now be completed overnight, and instead of requiring some 40,000 CPU servers and 35 megawatts of power, it should only need 500 Nvidia DGX H100 GPU-based systems and 5 megawatts.
“With cuLitho, TSMC can reduce prototype cycle time, increase throughput and reduce the carbon footprint of their manufacturing, and prepare for 2nm and beyond,” he said.
Nvidia partnered with some of the biggest names in the industry on the work. TSMC, the largest chip foundry in the world, plans to qualify the approach in production this summer. Meanwhile, chip design software maker Synopsys and equipment maker ASML said in a press release they will integrate cuLitho into their chip design and lithography software.
To fabricate a modern computer chip, makers shine ultraviolet light through intricate “stencils” to etch billions of patterns—like wires and transistors—onto smooth silicon wafers at near-atomic resolutions. This step, called photolithography, is how every new chip design, from Nvidia to Apple to Intel, is manifested physically in silicon.
The machines that make it happen, built by ASML, cost hundreds of millions of dollars and can produce near-flawless works of nanoscale art on chips. The end product, an example of which is humming away near your fingertips as you read this, is probably the most complex commodity in history. (TSMC churns out a quintillion transistors every six months—for Apple alone.)
To make more powerful chips, with ever-more, ever-smaller transistors, engineers have had to get creative.
Remember that stencil mentioned above? It’s the weirdest stencil you’ve ever seen. Today’s transistors are smaller than the wavelength of light used to etch them. Chipmakers have to use some extremely clever tricks to design stencils—or technically, photomasks—that can bend light into interference patterns whose features are smaller than the light’s wavelength and perfectly match the chip’s design.
Whereas photomasks once had a more one-to-one shape—a rectangle projected a rectangle—they’ve necessarily become more and more complicated over the years. The most advanced masks these days are more like mandalas than simple polygons.
To design these advanced photomask patterns, engineers reverse the process.
They start with the design they want, then stuff it through a wicked mess of equations describing the physics involved to design a suitable pattern. This step is called inverse lithography, and as the gap between light wavelength and feature size has increased, it’s become increasingly crucial to the whole process. But as the complexity of photomasks increases, so too does the computing power, time, and cost required to design them.
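In schematic form—a textbook-style simplification, not any fab’s actual objective function—the inverse problem looks like this:

```latex
% Simplified inverse lithography objective: find the photomask M whose
% projected wafer image, under the forward optical model F (exposure,
% diffraction, resist response), best matches the target layout T.
M^{*} = \arg\min_{M} \; \bigl\| F(M) - T \bigr\|^{2}
```

Solving this optimization across billions of mask pixels is what consumes those weeks of compute.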
“Computational lithography is the largest computation workload in chip design and manufacturing, consuming tens of billions of CPU hours annually,” Huang said. “Massive data centers run 24/7 to create reticles used in lithography systems.”
In the broader category of computational lithography—the methods used to design photomasks—inverse lithography is one of the newer, more advanced approaches. Its advantages, including greater depth of field and resolution, should benefit the entire chip, but due to its heavy computational lift, it’s currently used only sparingly.
Nvidia aims to reduce that lift by making the computation more amenable to graphics processing units, or GPUs. These powerful chips are used for tasks with lots of simple computations that can be completed in parallel, like video games and machine learning. So it isn’t just about running existing processes on GPUs, which yields only a modest improvement, but about modifying those processes specifically for GPUs.
That’s what the new software, cuLitho, is designed to do. The product, developed over the last four years, is a library of algorithms for the basic operations used in inverse lithography. By breaking inverse lithography down into these smaller, more repetitive computations, the whole process can now be split and parallelized on GPUs. And that, according to Nvidia, significantly speeds everything up.
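The pattern is easiest to see in miniature. Below is a toy sketch—invented for illustration, not cuLitho’s actual code—of how a mask optimization can be broken into independent tiles and farmed out in parallel (here to CPU processes standing in for GPUs):

```python
# Toy illustration (not cuLitho): inverse lithography spends most of its time
# on many independent tile-level computations, which is what makes it a good
# fit for massively parallel hardware.
import numpy as np
from concurrent.futures import ProcessPoolExecutor

TILE = 256  # pixels per tile edge; a real mask spans billions of pixels

def optimize_tile(target: np.ndarray) -> np.ndarray:
    """Stand-in for one tile's mask-optimization kernel: repeatedly simulate
    the projected image (here, a crude FFT-based blur) and nudge the mask
    toward the target pattern."""
    mask = target.copy()
    kernel = np.outer(np.hanning(TILE), np.hanning(TILE))  # fake optics
    for _ in range(10):  # a real solver iterates far more
        image = np.real(np.fft.ifft2(np.fft.fft2(mask) * np.fft.fft2(kernel)))
        mask += 0.1 * (target - image)  # gradient-like correction step
    return mask

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tiles = [rng.random((TILE, TILE)) for _ in range(64)]
    # The same split-and-parallelize idea, on GPUs instead of CPU cores,
    # is what turns weeks of computation into an overnight job.
    with ProcessPoolExecutor() as pool:
        optimized = list(pool.map(optimize_tile, tiles))
    print(f"optimized {len(optimized)} tiles")
```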
“If [inverse lithography] was sped up 40x, would many more people and companies use full-chip ILT on many more layers? I am sure of it,” said Vivek Singh, VP of Nvidia’s Advanced Technology Group, in a talk at GTC.
With a speedier, less computationally hungry process, makers can more rapidly iterate on experimental designs, tweak existing designs, make more photomasks per day, and generally, expand the use of inverse lithography to more of the chip, he said.
This last detail is critical. Wider use of inverse lithography should reduce print errors by sharpening the projected image—meaning chipmakers can churn out more working chips per silicon wafer—and be precise enough to make features at 2 nanometers and beyond.
It turns out making better chips isn’t all about the hardware. Software improvements, like cuLitho or the increased use of machine learning in design, can have a big impact too.
Image Credit: Nvidia
OpenAI Connects ChatGPT to the Internet
Kyle Wiggers | TechCrunch
“[This week, OpenAI] launched plugins for ChatGPT, which extend the bot’s functionality by granting it access to third-party knowledge sources and databases, including the web. Easily the most intriguing plugin is OpenAI’s first-party web-browsing plugin, which allows ChatGPT to draw data from around the web to answer the various questions posed to it.”
Nvidia Speeds Key Chipmaking Computation by 40x
Samuel K. Moore | IEEE Spectrum
“Called inverse lithography, it’s a key tool that allows chipmakers to print nanometer-scale features using light with a longer wavelength than the size of those features. Inverse lithography’s use has been limited by the massive size of the needed computation. Nvidia’s answer, cuLitho, is a set of algorithms designed for use with GPUs, [which] turns what has been two weeks of work into an overnight job.”
Epic’s New Motion-Capture Animation Tech Has to Be Seen to Be Believed
Kyle Orland | Ars Technica
“Epic’s upcoming MetaHuman facial animation tool looks set to revolutionize [the]…labor- and time-intensive workflow [of motion-capture]. In an impressive demonstration at Wednesday’s State of Unreal stage presentation, Epic showed off the new machine-learning-powered system, which needed just a few minutes to generate impressively real, uncanny-valley-leaping facial animation from a simple head-on video taken on an iPhone.”
United to Fly Electric Air Taxis to O’Hare Beginning in 2025
Stefano Esposito | Chicago Sun-Times
“The trip between O’Hare and the Illinois Medical District is expected to take about 10 minutes, according to California-based Archer Aviation, which is partnering with United Airlines. …An Archer spokesman said they hope to make the fare competitive with Uber Black, a ride-hailing service that provides luxury vehicles and top-rated drivers to customers. On Thursday afternoon, an Uber Black ride from Vertiport to O’Hare was $101.”
These New Tools Let You See for Yourself How Biased AI Image Models Are
Melissa Heikkilä | MIT Technology Review
“Popular AI image-generating systems notoriously tend to amplify harmful biases and stereotypes. But just how big a problem is it? You can now see for yourself using interactive new online tools. (Spoiler alert: it’s big.) The tools, built by researchers at AI startup Hugging Face and Leipzig University and detailed in a non-peer-reviewed paper, allow people to examine biases in three popular AI image-generating models: DALL-E 2 and the two recent versions of Stable Diffusion.”
BMW’s New Factory Doesn’t Exist in Real Life, but It Will Still Change the Car Industry
Jesus Diaz | Fast Company
“Before construction on [a new car] factory begins, thousands of engineers draw millions of CAD drawings and meet for thousands of hours. Worse yet, they know that no amount of planning will prevent a long list of bugs once the factory finally opens, which can result in millions of dollars lost every day until the bugs are resolved. At least, that’s how it used to work. This is all about to change thanks to the world’s first virtual factory, a perfect digital twin of BMW’s future 400-hectare plant in Debrecen, Hungary, which will reportedly produce around 150,000 vehicles every year when it opens in 2025.”
Fusion Power Is Coming Back Into Fashion
Editorial Staff | The Economist
“[Forty-two companies] think they can succeed, where others failed, in taking fusion from the lab to the grid—and do so with machines far smaller and cheaper than the latest intergovernmental behemoth, ITER, now being built in the south of France at a cost estimated by America’s energy department to be $65bn. In some cases that optimism is based on the use of technologies and materials not available in the past; in others, on simpler designs.”
Plastic Paving: Egyptian Startup Turns Millions of Bags Into Tiles
Editorial Staff | Reuters
“An Egyptian startup is aiming to turn more than 5 billion plastic bags into tiles tougher than cement as it tackles the twin problems of tons of waste entering the Mediterranean Sea and high levels of building sector emissions. ‘So far, we have recycled more than 5 million plastic bags, but this is just the beginning,’ TileGreen co-founder Khaled Raafat told Reuters. ‘We aim that by 2025, we will have recycled more than 5 billion plastic bags.’ ”
Image Credit: BoliviaInteligente / Unsplash
There’s a common perception that artificial intelligence (AI) will help streamline our work. There are even fears that it could wipe out the need for some jobs altogether.
But a study of science laboratories that I carried out with three colleagues at the University of Manchester found that the introduction of automated processes meant to simplify work—and free people’s time—can also make that work more complex, generating new tasks that many workers might perceive as mundane.
In the study, published in Research Policy, we looked at the work of scientists in a field called synthetic biology, or synbio for short. Synbio is concerned with redesigning organisms to have new abilities. It is involved in growing meat in the lab, in new ways of producing fertilizers, and in the discovery of new drugs.
Synbio experiments rely on advanced robotic platforms to repetitively move a large number of samples. They also use machine learning to analyze the results of large-scale experiments.
These, in turn, generate large amounts of digital data. This process is known as “digitalization,” where digital technologies are used to transform traditional methods and ways of working.
Some of the key objectives of automating and digitalizing scientific processes are to scale up the science that can be done while saving researchers time to focus on what they would consider more “valuable” work.
However, in our study, scientists were not released from repetitive, manual, or boring tasks as one might expect. Instead, the use of robotic platforms amplified and diversified the kinds of tasks researchers had to perform. There are several reasons for this.
Among them is the fact that the number of hypotheses (the scientific term for a testable explanation for some observed phenomenon) and experiments that needed to be performed increased. With automated methods, the possibilities are amplified.
Scientists said automation allowed them to evaluate a greater number of hypotheses, and it also multiplied the ways they could make subtle changes to the experimental set-up. Both had the effect of boosting the volume of data that needed checking, standardizing, and sharing.
Also, robots needed to be “trained” in performing experiments previously carried out manually. Humans, too, needed to develop new skills for preparing, repairing, and supervising robots. This was done to ensure there were no errors in the scientific process.
Scientific work is often judged on output such as peer-reviewed publications and grants. However, the time taken to clean, troubleshoot, and supervise automated systems competes with the tasks traditionally rewarded in science. These less-valued tasks may also be largely invisible—particularly to managers, who may be unaware of the mundane work because they don’t spend as much time in the lab.
The synbio scientists carrying out these responsibilities were not better paid or more autonomous than their managers. They also assessed their own workload as being higher than those above them in the job hierarchy.
It’s possible these lessons might apply to other areas of work too. ChatGPT is an AI-powered chatbot that “learns” from information available on the web. When prompted by questions from online users, the chatbot offers answers that appear well-crafted and convincing.
According to Time magazine, in order for ChatGPT to avoid returning answers that were racist, sexist, or offensive in other ways, workers in Kenya were hired to filter toxic content delivered by the bot.
There are many often invisible work practices needed for the development and maintenance of digital infrastructure. This phenomenon could be described as a “digitalization paradox.” It challenges the assumption that everyone involved or affected by digitalization becomes more productive or has more free time when parts of their workflow are automated.
Concerns over a decline in productivity are a key motivation behind organizational and political efforts to automate and digitalize everyday work. But we should not take promises of gains in productivity at face value.
Instead, we should challenge the ways we measure productivity by considering the invisible types of tasks humans can accomplish, beyond the more visible work that is usually rewarded.
We also need to consider how to design and manage these processes so that technology can more positively add to human capabilities.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Gerd Altmann from Pixabay
Reducing the cost of space launches will be critical if we want humanity to have a more permanent presence beyond orbit. The partially successful launch of the first 3D-printed rocket could be a significant step in that direction.
Getting stuff into space is dramatically cheaper than it used to be thanks to a wave of innovation in the private space industry led by SpaceX. More affordable launches have brought on a rapid expansion in access to space and made a host of new space-based applications feasible. But costs are still a major barrier.
That’s largely because rockets are incredibly expensive and difficult to build. A promising way round this is to use 3D printing to simplify the design and manufacturing process. SpaceX has experimented with the idea for years, and the engines on Rocket Lab’s Electron launch vehicle are almost entirely 3D-printed.
But one company wants to take things even further. Relativity Space has built one of the largest metal 3D printers in the world and uses it to fabricate almost all of its Terran 1 rocket. The rocket blasted off for the first time yesterday, and while the launch vehicle didn’t quite make orbit, it survived max-q, or the part of flight when the rocket is subjected to maximum mechanical stress.
“Today is a huge win, with many historic firsts,” the company said in a tweet following the launch. “We successfully made it through max-q, the highest stress state on our printed structures. This is the biggest proof point for our novel additive manufacturing approach.”
This was the company’s third bite at the cherry after two previous launches were called off earlier in the month. The rocket lifted off from a launchpad at the US Space Force’s launch facility in Cape Canaveral, Florida at 8:25 pm (EST) and flew for about three minutes.
Shortly after making it through max-q and the successful separation of the second stage from the booster, the rocket’s engine cut out due to what the company cryptically referred to as “an anomaly,” though it promised to provide updates once flight data has been analyzed.
While that meant Terran 1 didn’t make it into orbit, the launch is nonetheless likely to be seen as a success. It’s fairly common for the first launch of a new rocket to go awry—SpaceX’s first three launches failed—so getting off the launch pad and passing key milestones like max-q and first stage separation are significant achievements.
This is particularly important for Relativity Space, which is taking a radically different approach to manufacturing its rockets compared to competitors. Prior to the launch, cofounder Tim Ellis said the company’s main goal was to prove the structural integrity of their 3D-printed design.
“We have already proven on the ground what we hope to prove in-flight—that when dynamic pressures and stresses on the vehicle are highest, 3D printed structures can withstand these forces,” he said in a tweet. “This will essentially prove the viability of using additive manufacturing tech to produce products that fly.”
There is a lot that is novel about Relativity’s design. At present, roughly 85 percent of the structure by mass is 3D-printed, but the company hopes to push that to 95 percent in future iterations. This has allowed Relativity to use 100 times fewer parts than traditional rockets and go from raw materials to a finished product in just 60 days.
The engines also run on a mixture of liquid methane and liquid oxygen, which is the same technology SpaceX is pursuing for its massive Starship rocket. This fuel mix is seen as the most promising for Mars exploration as it can be produced on the red planet itself, eliminating the need to carry fuel for the return journey.
But while the 110-foot-tall Terran 1 can carry up to 2,756 pounds to low-Earth orbit, and Relativity is selling rides on the rocket for around $12 million, it is really a test bed for a more advanced rocket. That rocket, the Terran R, will be 216 feet tall and able to carry 44,000 pounds when it makes it onto the launchpad as early as 2024.
Relativity isn’t the only company working hard to bring 3D printing to the space industry.
California startup Launcher has created a satellite platform called Orbiter that’s powered by 3D-printed rocket engines, and Colorado-based Ursa Major is 3D printing rocket engines it hopes others will use in their vehicles. At the same time, UK-based Orbex is using metal 3D printers from German manufacturer EOS to manufacture entire rockets.
Now that 3D-printed rockets have passed their first true test and made it into space, don’t be surprised to see more companies following in the footsteps of these early pioneers.
Image Credit: Relativity Space
The hype around artificial intelligence has been building for years, and you could say it reached a crescendo with OpenAI’s recent release of ChatGPT (and now GPT-4). It only took two months for ChatGPT to reach 100 million users, making it the fastest-growing consumer application in history (it took Instagram two and a half years to gain the same user base, and TikTok nine months).
In Ian Beacraft’s opinion, we’re in an AI hype bubble, way above the top of the peak of inflated expectations on the Gartner Hype Cycle. But it may be justified, because the AI tools we’re seeing really do have the power to overhaul the way we work, learn, and create value.
Beacraft is the founder of the strategic foresight agency Signal & Cipher and co-owner of a production studio that designs virtual worlds. In a talk at South by Southwest last week, he shared his predictions of how AI will shape society in the years and decades to come.
Beacraft pointed out that with the Industrial Revolution we were able to take skills of human labor and amplify them far beyond what the human body is capable of. “Now we’re doing the same thing with knowledge work,” he said. “We’re able to do so much more, put so much more power behind it.” The Industrial Revolution mechanized skills, and today we’re digitizing skills. Digitized skills are programmable, composable, and upgradeable—and AI is taking it all to another level.
Say you want to write a novel in the style of a specific writer. You could prompt ChatGPT to do so, be it by the sentence, paragraph, or chapter, then tweak the language to your liking (whether that’s cheating or some form of plagiarism is another issue, and a pretty significant one); you’re programming the algorithm to extract years’ worth of study and knowledge—years that you don’t have to put in. Composable means you can stack skills on top of each other, and upgradeable means anytime an AI gets an upgrade, so do you. “You didn’t have to go back to school for it, but all of a sudden you have new skills that came from the upgrade,” Beacraft said.
Due to these features, he believes AI is going to turn us all into creative generalists. Right now we’re told to specialize from an early age and build expertise in one area—but what happens once AI can quickly outpace us in any domain? Will it still make sense to become an expert in a single field?
“Those who have expertise and depth in several domains, and interest and passion and curiosity across a broad swathe—those are the people who are going to dominate the next era,” Beacraft said. “When you have an understanding of how something works, you can now produce for it. You don’t have to have expertise in all the different layers to make that happen. You can know how the general territory or field operates, then have machines abstract the rest of the skills.”
For example, a graphic designer who draws a comic book could use AI-powered design tools to turn that comic book into a 3D production without knowing 3D modeling, camera movement, blending, or motion capture; AI now enables just one person to perform all of the virtual production elements. “This wouldn’t have been possible a couple years ago, and now—with some effort—it is,” Beacraft said. The video below was created entirely by one person using generative AI, including the imagery, sound, motion, and talk track.
Generative AI tools are also starting to learn how to use other tools themselves, and they’re only going to get better at it. ChatGPT, for example, isn’t very good at hard science, but it could pass those kinds of questions off to something like WolframAlpha and include the tool’s answer in its reply.
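As a rough sketch of that hand-off pattern—with every function name invented for illustration, not OpenAI’s or Wolfram’s actual APIs—the routing logic might look like this:

```python
# Hypothetical sketch of tool delegation: the chatbot keeps the conversation
# but hands computation-heavy questions to an external engine, then folds the
# result into its reply. All functions here are illustrative stubs.
def looks_like_hard_science(question: str) -> bool:
    """Crude router: decide whether to hand the question to a math engine."""
    keywords = ("integral", "derivative", "solve", "orbital period", "molar mass")
    return any(k in question.lower() for k in keywords)

def ask_chat_model(question: str) -> str:
    return f"[chat model's best-effort answer to: {question}]"  # stub

def ask_math_engine(question: str) -> str:
    return f"[computed result for: {question}]"  # stub for a WolframAlpha-style tool

def answer(question: str) -> str:
    if looks_like_hard_science(question):
        result = ask_math_engine(question)
        return ask_chat_model(f"Explain this result to the user: {result}")
    return ask_chat_model(question)

print(answer("Solve x**2 - 5x + 6 = 0"))
```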
This is not only going to change our work, Beacraft said, it’s going to change our relationship with work. Right now, organizations expect incremental employee improvement in narrowly-defined roles. Job titles like designer, accountant, or project manager have key performance indicators that typically improve two to three percent per year. “But if employees only grow incrementally, how can organizations expect exponential growth?” Beacraft asked.
AI will take our traditional job roles and make them horizontal, giving us the ability to flex in any direction. As a result, we’ll have just-in-time skills and expertise on demand. “We will not lose our jobs, we will lose our job descriptions,” Beacraft said. “When organizations have teams of people working horizontally, all that new capability is net new, not incremental—and all of a sudden you have exponential growth.”
That growth could do the opposite of what the predominant narrative tells us: that AI, robotics, and automation will take over various kinds of work and do away with our jobs. But AI could very well end up creating more work for us.
For example, teams of scientists using AI to help them run experiments more efficiently could increase the number of experiments they perform—but then they have more results, more data to analyze, and more work sifting through all this information to ultimately draw a conclusion or find what they’re looking for. But hey—AI is getting good at handling extra administrative work, too.
We may be in an AI hype bubble, but this technology is reaching more people than it ever has before. While there are certainly nefarious uses for generative AI—just look at all the students trying to turn in essays written by ChatGPT, or how deepfakes are becoming harder to pinpoint—there are as many or more productive uses that will impact society, the economy, and our lives in positive ways.
“It’s not just about data and information, it’s about how these AIs can help us shape the world,” Beacraft said. “It’s about how we project what we want to create onto the world around us.”
The race to solve every protein structure just welcomed another tech giant: Meta AI.
The team, a research offshoot of Meta—the company behind Facebook and Instagram—came onto the protein shape prediction scene with an ambitious goal: to decipher the “dark matter” of the protein universe. Often found in bacteria, viruses, and other microorganisms, these proteins lounge in our everyday environments but are complete mysteries to science.
“These are the structures we know the least about. These are incredibly mysterious proteins. I think they offer the potential for great insight into biology,” said senior author Dr. Alexander Rives to Nature.
In other words, they’re a treasure trove of inspiration for biotechnology. Hidden in their secretive shapes are keys for designing efficient biofuels, antibiotics, enzymes, or even entirely new organisms. In turn, the data from protein predictions could further train AI models.
At the heart of Meta’s new AI, dubbed ESMFold, is a large language model. It might sound familiar. These machine learning algorithms have taken the world by storm with the rockstar chatbot ChatGPT. Known for its ability to generate beautiful essays, poems, and lyrics with simple prompts, ChatGPT—and the recently launched GPT-4—are trained with millions of publicly available texts. Eventually the AI learns to predict letters, words, and even write entire paragraphs and, in the case of Bing’s similar chatbot, hold conversations that sometimes turn slightly unnerving.
The new study, published in Science, bridges the AI model with biology. Proteins are made of 20 “letters.” Thanks to evolution, the sequence of those letters helps determine a protein’s ultimate shape. If large language models can easily weave the 26 letters of the English alphabet into coherent messages, why couldn’t they also work for proteins?
Spoiler: they do. ESMFold blasted through roughly 600 million protein structure predictions in just two weeks using 2,000 graphics processing units (GPUs). Compared to previous attempts, the AI made the process up to 60 times faster. The authors put every structure into the ESM Metagenomic Atlas, which you can explore here.
To Dr. Alfonso Valencia at the Barcelona Supercomputing Center (BSC), who was not involved in the work, the beauty of using large language systems is a “conceptual simplicity.” With further development, the AI can predict “the structure of non-natural proteins, expanding the known universe beyond what evolutionary processes have explored.”
ESMFold follows a simple guideline: sequence predicts structure.
Let’s backtrack. Proteins are made from 20 amino acids—each one a “letter”—strung together like spiky beads on a string. Our cells then fold them into delicate shapes: some look like rumpled bed sheets, others like a swirly candy cane or loose ribbons. Proteins can then grab onto each other to form complexes—for example, a tunnel crossing a brain cell’s membrane that controls the cell’s activity, and in turn how we think and remember.
Scientists have long known that amino acid letters help shape the final structure of a protein. Similar to letters or characters in a language, only certain ones when strung together make sense. In the case of proteins, these sequences make them functional.
“The biological properties of a protein constrain the mutations to its sequence that are selected through evolution,” the authors said.
Similar to how different letters in the alphabet converge to create words, sentences, and paragraphs without sounding like complete gibberish, the protein letters do the same. There is an “evolutionary dictionary” of sorts that helps string up amino acids into structures the body can comprehend.
“The logic of the succession of amino acids in known proteins is the result of an evolutionary process that has led them to have the specific structure with which they perform a particular function,” said Valencia.
Life’s relatively limited dictionary is great news for large language models.
These AI models scour readily available texts to learn and build up predictions of the next word. The end results, as seen in GPT-3 and ChatGPT, are strikingly natural conversations and fantastical artistic images.
Meta AI used the same concept, but rewrote the playbook for protein structure predictions. Rather than feeding the algorithm with texts, they gave the program sequences of known proteins.
The AI model—called a transformer protein language model—learned the general architecture of proteins using up to 15 billion “settings.” It saw roughly 65 million different protein sequences overall.
In their next step the team hid certain letters from the AI, prompting it to fill in the blanks. In what amounts to autocomplete, the program eventually learned how different amino acids connect to (or repel) each other. In the end, the AI formed an intuitive understanding of evolutionary protein sequences—and how they work together to make functional proteins.
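To make the fill-in-the-blank idea concrete, here’s a toy sketch using simple neighbor counts in place of a 15-billion-parameter transformer; the sequences are invented, and the method is vastly simpler than ESM-2’s actual training:

```python
# Minimal "masked prediction" toy for protein letters: hide one amino acid
# and guess it from its neighbors, learned from a tiny pretend corpus.
import random
from collections import Counter, defaultdict

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard one-letter codes

# Pretend corpus of known protein fragments (real training saw ~65M sequences).
corpus = ["MKTAYIAKQR", "MKTAYLAKQR", "MKSAYIAKNR", "MKTVYIAKQR"]

# Learn a crude "evolutionary dictionary": which letter tends to appear
# between a given left and right neighbor.
context_counts = defaultdict(Counter)
for seq in corpus:
    for i in range(1, len(seq) - 1):
        context_counts[(seq[i - 1], seq[i + 1])][seq[i]] += 1

def fill_in_the_blank(seq: str, masked_pos: int) -> str:
    """Guess the hidden amino acid from its immediate neighbors."""
    context = (seq[masked_pos - 1], seq[masked_pos + 1])
    counts = context_counts.get(context)
    if counts:
        return counts.most_common(1)[0][0]
    return random.choice(AMINO_ACIDS)  # context never seen: guess

test, pos = "MKTAYIAKQR", 5
print(f"hidden: {test[pos]}, predicted: {fill_in_the_blank(test, pos)}")
```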
As a proof of concept, the team tested ESMFold using two well-known test sets. One, CAMEO, involved nearly 200 structures; the other, CASP14, had 51 publicly released protein shapes.
Overall, the AI “provides state-of-the-art structure prediction accuracy,” the team said, “matching AlphaFold2 performance on more than half the proteins.” It also reliably tackled large protein complexes—for example, the channels on neurons that control their actions.
The team then took their AI a step further, venturing into the world of metagenomics.
Metagenomes are what they sound like: a hodgepodge of DNA material. Normally these come from environmental sources such as the dirt under your feet, seawater, or even normally inhospitable thermal vents. Most of the microbes can’t be artificially grown in labs, yet some have superpowers such as resisting volcanic-level heat, making them a biological dark matter yet to be explored.
At the time the paper was published, the AI had predicted over 600 million of these proteins. The count is now up to over 700 million with the latest release. The predictions came fast and furious in roughly two weeks. In contrast, previous modeling attempts took up to 10 minutes for just a single protein.
Roughly a third of the protein predictions were of high confidence, with enough detail to zoom into the atomic-level scale. Because the protein predictions were based solely on their sequences, millions of “aliens” popped up—structures unlike anything in established databases or those previously tested.
“It’s interesting that more than 10 percent of the predictions are for proteins that bear no resemblance to other known proteins,” said Valencia. It might be due to the magic of language models, which are far more flexible at exploring—and potentially generating—previously unheard of sequences that make up functional proteins. “This is a new space for the design of proteins with new sequences and biochemical properties with applications in biotechnology and biomedicine,” he said.
As an example, ESMFold could potentially help suss out the consequences of single-letter changes in a protein. Called point mutations, these seemingly benign edits can wreak havoc in the body, causing devastating metabolic syndromes, sickle cell anemia, and cancer. A lean and relatively simple AI puts this kind of analysis within reach of the average biomedical research lab, while its speed scales up protein shape prediction.
Biomedicine aside, another fascinating idea is that proteins may help train large language models in a way texts can’t. As Valencia explained, “On the one hand, protein sequences are more abundant than texts, have more defined sizes, and a higher degree of variability. On the other hand, proteins have a strong internal ‘meaning’—that is, a strong relationship between sequence and structure, a meaning or coherence that is much more diffuse in texts,” bridging the two fields into a virtuous feedback loop.
Image Credit: Meta AI
Solar power is going to play a major role in combating climate change, but it requires huge amounts of land. Floating solar panels on top of reservoirs could provide up to a third of the world’s electricity without taking up extra space, and also save trillions of gallons of water from evaporating.
So-called “floating photovoltaic” systems have a lot going for them. The surface of reservoirs can’t be used for much else, so it’s comparatively cheap real estate, and it also frees up land for other important purposes. And because these bodies of water are designed to service major urban centers, they’re normally close to where the power will be needed, making electricity distribution simpler.
By shielding the water from the sun, floating solar panels can also significantly reduce evaporation, which can be a major concern in the hot dry climates where solar works best. And what evaporation does occur can actually help to cool the panels, which operate more efficiently at lower temperatures and therefore squeeze out extra power.
Just how promising the approach could be has remained unclear, however, as analyses have so far been limited to individual countries or regions. A new study in Nature Sustainability now provides a comprehensive assessment of the global potential of floating solar power, finding that it could provide between a fifth and half of the world’s electricity needs while saving 26 trillion gallons of water from evaporating.
The new research was made possible by combining several databases mapping reservoirs around the world. This allowed the researchers to identify a total of 114,555 water bodies with a total area of 556,111 square kilometers (214,716 square miles).
They then used a model developed at the US Department of Energy’s Sandia National Laboratories that can simulate solar panel performance in different climatic conditions. Finally, they used regional hydrology simulations to predict how much the solar panels would reduce evaporation based on local climate data.
In their baseline study, the researchers assumed that solar panels would cover only 30 percent of a reservoir’s surface or 30 square kilometers (11.6 square miles), whichever is lower. This was done to account for the practical difficulties of building larger arrays as well as the potential ecological impact of completely covering the body of water.
Given these limitations, the researchers calculated that the global generating potential for floating solar panels was a massive 9,434 terawatt-hours a year, which is roughly 40 percent of the 22,848 terawatt-hours the world consumes yearly, according to the International Energy Agency’s latest figures.
If the total coverage was limited to a more conservative 10 percent, the researchers found floating solar power could still generate as much as 4,356 terawatt-hours a year. And if the largest reservoirs were allowed to have up to 50 square kilometers (19 square miles) of panels, the total capacity rose to 11,012 terawatt-hours, almost half of global electricity needs.
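As a minimal sketch of the coverage rule described above (the reservoir areas below are hypothetical, and the real study layered climate and hydrology models on top of this cap):

```python
# Toy illustration of the study's coverage cap: panels occupy a fixed
# fraction of the reservoir surface or a hard ceiling, whichever is lower.
def panel_area_km2(reservoir_km2: float, fraction: float = 0.30,
                   cap_km2: float = 30.0) -> float:
    """Allowed panel area under the baseline scenario."""
    return min(fraction * reservoir_km2, cap_km2)

reservoirs = [2.5, 48.0, 310.0]  # hypothetical surface areas in km^2
for area in reservoirs:
    print(f"{area:7.1f} km^2 reservoir -> {panel_area_km2(area):5.1f} km^2 of panels")
# The 310 km^2 reservoir hits the 30 km^2 cap: 30% coverage would be 93 km^2.
```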
The authors note that this capacity isn’t evenly distributed, and some countries stand to gain more than others. With more than 25,000 reservoirs, the US has the most to gain and could generate 1,911 terawatt-hours a year, almost half its total consumption. China, India, and Brazil could also source a significant amount of their power this way.
But most interestingly, the analysis showed that as many as 6,256 cities could theoretically meet all of their electricity demands with floating solar power. Most have a population below 50,000, but as many as 150 are cities with more than a million people.
It’s important to note that this study was simply assessing the potential of the idea. Floating solar panels have been around for some time, but they are more expensive to deploy than land-based panels, and there are significant concerns about what kind of impact blocking out sunlight could have on reservoir ecosystems.
But given the need to rapidly scale up renewable energy generation, and the scarcity of land for large solar installations, turning our reservoirs into power stations could prove to be a smart idea.
In 2020, artificial intelligence company OpenAI stunned the tech world with its GPT-3 machine learning algorithm. After ingesting a broad slice of the internet, GPT-3 could generate writing that was hard to distinguish from text authored by a person, do basic math, write code, and even whip up simple web pages.
OpenAI followed up GPT-3 with more specialized algorithms that could seed new products, like an AI called Codex to help developers write code and the wildly popular (and controversial) image-generator DALL-E 2. Then late last year, the company upgraded GPT-3 and dropped a viral chatbot called ChatGPT—by far, its biggest hit yet.
Now, a rush of competitors is battling it out in the nascent generative AI space, from new startups flush with cash to venerable tech giants like Google. Billions of dollars are flowing into the industry, including a $10-billion follow-up investment by Microsoft into OpenAI.
This week, after months of rather over-the-top speculation, OpenAI’s GPT-3 sequel, GPT-4, officially launched. In a blog post, interviews, and two reports (here and here), OpenAI said GPT-4 is better than GPT-3 in nearly every way.
GPT-4 is multimodal, which is a fancy way of saying it was trained on both images and text and can identify, describe, and riff on what’s in an image using natural language. OpenAI said the algorithm’s output is higher quality, more accurate, and less prone to bizarre or toxic outbursts than prior versions. It also outperformed the upgraded GPT-3 (called GPT-3.5) on a slew of standardized tests, placing among the top 10 percent of human test-takers on the bar licensing exam for lawyers and scoring either a 4 or a 5 on 13 out of 15 college-level advanced placement (AP) exams for high school students.
To show off its multimodal abilities—which have yet to be offered more widely as the company evaluates them for misuse—OpenAI president Greg Brockman sketched a schematic of a website on a pad of paper during a developer demo. He took a photo and asked GPT-4 to create a webpage from the image. In seconds, the algorithm generated and implemented code for a working website. In another example, described by The New York Times, the algorithm suggested meals based on an image of food in a refrigerator.
The company also outlined its work to reduce risk inherent in models like GPT-4. Notably, the raw algorithm was complete last August. OpenAI spent eight months working to improve the model and rein in its excesses.
Much of this work was accomplished by teams of experts poking and prodding the algorithm and giving feedback, which was then used to refine the model with reinforcement learning. The version launched this week is an improvement on the raw version from last August, but OpenAI admits it still exhibits known weaknesses of large language models, including algorithmic bias and an unreliable grasp of the facts.
By this account, GPT-4 is a big improvement technically and makes progress mitigating, but not solving, familiar risks. In contrast to prior releases, however, we’ll largely have to take OpenAI’s word for it. Citing an increasingly “competitive landscape and the safety implications of large-scale models like GPT-4,” the company opted to withhold specifics about how GPT-4 was made, including model size and architecture, computing resources used in training, what was included in its training dataset, and how it was trained.
Ilya Sutskever, chief scientist and cofounder at OpenAI, told The Verge “it took pretty much all of OpenAI working together for a very long time to produce this thing” and lots of other companies “would like to do the same thing.” He went on to suggest that as the models grow more powerful, the potential for abuse and harm makes open-sourcing them a dangerous proposition. But this is hotly debated among experts in the field, and some pointed out the decision to withhold so much runs counter to OpenAI’s stated values when it was founded as a nonprofit. (OpenAI reorganized as a capped-profit company in 2019.)
The algorithm’s full capabilities and drawbacks may not become apparent until access widens further and more people test (and stress) it out. Before reining it in, Microsoft’s Bing chatbot caused an uproar as users pushed it into bizarre, unsettling exchanges.
Overall, the technology is quite impressive—like its predecessors—but also, despite the hype, more an iteration on GPT-3 than a revolution. With the exception of its new image-analyzing skills, most abilities highlighted by OpenAI are improvements and refinements of older algorithms. Not even access to GPT-4 is novel. Microsoft revealed this week that it secretly used GPT-4 to power its Bing chatbot, which had recorded some 45 million chats as of March 8.
While GPT-4 may not be the step change some predicted, the scale of its deployment almost certainly will be.
GPT-3 was a stunning research algorithm that wowed tech geeks and made headlines; GPT-4 is a far more polished algorithm that’s about to be rolled out to millions of people in familiar settings like search bars, Word docs, and LinkedIn profiles.
In addition to its Bing chatbot, Microsoft announced plans to offer services powered by GPT-4 in LinkedIn Premium and Office 365. These will be limited rollouts at first, but as each iteration is refined in response to feedback, Microsoft could offer them to the hundreds of millions of people using its products. (Earlier this year, the free version of ChatGPT hit 100 million users faster than any app in history.)
It’s not only Microsoft layering generative AI into widely used software.
Google said this week it plans to weave generative algorithms into its own productivity software—like Gmail and Google Docs, Slides, and Sheets—and will offer developers API access to PaLM, a GPT-4 competitor, so they can build their own apps on top of it. Other models are coming too. Meta recently gave researchers access to its open-source LLaMA model—it was later leaked online—while a Google-backed startup, Anthropic, and China’s tech giant Baidu rolled out their own chatbots, Claude and Ernie, this week.
As models like GPT-4 make their way into products, they can be updated behind the scenes at will. OpenAI and Microsoft continually tweaked ChatGPT and Bing as feedback rolled in. ChatGPT Plus users (a $20/month subscription) were granted access to GPT-4 at launch.
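For developers, that slotting-in is nearly frictionless. Here’s a minimal sketch using the OpenAI Python client as it existed in early 2023 (the API key and prompt are placeholders); swapping in a newer model is a one-string change:

```python
# Minimal sketch of app code built on the chat API (openai-python 0.27 era).
# Because the model is just a string parameter, the provider can upgrade
# what's behind it—or the app can switch models—without touching the rest.
import openai

openai.api_key = "sk-..."  # placeholder

def draft_email(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    response = openai.ChatCompletion.create(
        model=model,  # change to "gpt-4" and the rest of the app is untouched
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(draft_email("Draft a two-line email rescheduling Friday's meeting."))
```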
It’s easy to imagine GPT-5 and other future models slotting into the ecosystem being built now as simply, and invisibly, as a smartphone operating system that upgrades overnight.
If there’s anything we’ve learned in recent years, it’s that scale reveals all.
It’s hard to predict how new tech will succeed or fail until it makes contact with a broad slice of society. The next months may bring more examples of algorithms revealing new abilities and breaking or being broken, as their makers scramble to keep pace.
“Safety is not a binary thing; it is a process,” Sutskever told MIT Technology Review. “Things get complicated any time you reach a level of new capabilities. A lot of these capabilities are now quite well understood, but I’m sure that some will still be surprising.”
Longer term, when the novelty wears off, bigger questions may loom.
The industry is throwing spaghetti at the wall to see what sticks. But it’s not clear generative AI is useful—or appropriate—in every instance. Chatbots in search, for example, may not outperform older approaches until they’ve proven to be far more reliable than they are today. And the cost of running generative AI, particularly at scale, is daunting. Can companies keep expenses under control, and will users find products compelling enough to vindicate the cost?
Also, the fact that GPT-4 makes progress on but hasn’t solved the best-known weaknesses of these models should give us pause. Some prominent AI experts believe these shortcomings are inherent to the current deep learning approach and won’t be solved without fundamental breakthroughs.
Factual missteps and biased or toxic responses in a fraction of interactions are less impactful when numbers are small. But on a scale of hundreds of millions or more, even less than a percent equates to a big number.
“LLMs are best used when the errors and hallucinations are not high impact,” Matthew Lodge, the CEO of Diffblue, recently told IEEE Spectrum. Indeed, companies are appending disclaimers warning users not to rely on them too much—like keeping your hands on the steering wheel of that Tesla.
It’s clear the industry is eager to keep the experiment going though. And so, hands on the wheel (one hopes), millions of people may soon begin churning out presentation slides, emails, and websites in a jiffy, as the new crop of AI sidekicks arrives in force.
Image Credit: Luke Jones / Unsplash
You Can Now Run a GPT-3-Level AI Model on Your Laptop, Phone, and Raspberry Pi
Benj Edwards | Ars Technica
“On Friday, a software developer named Georgi Gerganov created a tool called ‘llama.cpp’ that can run Meta’s new GPT-3-class AI large language model, LLaMA, locally on a Mac laptop. Soon thereafter, people worked out how to run LLaMA on Windows as well. Then someone showed it running on a Pixel 6 phone, and next came a Raspberry Pi (albeit running very slowly). If this keeps up, we may be looking at a pocket-sized ChatGPT competitor before we know it.”
A Gene Therapy Cure for Sickle Cell Is on the Horizon
Emily Mullin | Wired
“[Evie] Junior…is one of dozens of sickle cell patients in the US and Europe who have received gene therapies in clinical trials—some led by universities, others by biotech companies. Two such therapies, one from Bluebird Bio and the other from CRISPR Therapeutics and Vertex Pharmaceuticals, are the closest to coming to market. The companies are now seeking regulatory approval in the US and Europe. If successful, more patients could soon benefit from these therapies, although access and affordability could limit who gets them.”
This Couple Just Got Married in the Taco Bell Metaverse
Tanya Basu | MIT Technology Review
“The chapel at the company’s Taco Bell Cantina restaurant in Las Vegas has married 800 couples so far. There were copycat virtual weddings, too. ‘Taco Bell saw fans of the brand interact in the metaverse and decided to meet them quite literally where they were,’ a spokesperson said. That meant dancing hot sauce packets, a Taco Bell–themed dance floor, a turban for Mohnot, and the famous bell branding everywhere.”
Inside the Global Race to Turn Water Into Fuel
Max Bearak | The New York Times
“A consortium of energy companies led by BP plans to cover an expanse of land eight times as large as New York City with as many as 1,743 wind turbines, each nearly as tall as the Empire State Building, along with 10 million or so solar panels and more than a thousand miles of access roads to connect them all. But none of the 26 gigawatts of energy the site expects to produce, equivalent to a third of what Australia’s grid currently requires, will go toward public use. Instead, it will be used to manufacture a novel kind of industrial fuel: green hydrogen.”
Has the 3D Printing Revolution Finally Arrived?
Tim Lewis | The Guardian
“‘What happened 10 years ago, when there was this massive hype, was there was so much nonsense being written: “You’ll print anything with these machines! It’ll take over the world!”‘ says Hague. ‘But it’s now becoming a really mature technology, it’s not an emerging technology really any more. It’s widely implemented by the likes of Rolls-Royce and General Electric, and we work with AstraZeneca, GSK, a whole bunch of different people. Printing things at home was never going to happen, but it’s developed into a multibillion-dollar industry.’”
AI-Imager Midjourney v5 Stuns With Photorealistic Images—and 5-Fingered Hands
Benj Edwards | Ars Technica
“Midjourney v5 is available now as an alpha test for customers who subscribe to the Midjourney service, which is available through Discord. ‘MJ v5 currently feels to me like finally getting glasses after ignoring bad eyesight for a little bit too long,’ said Julie Wieland, a graphic designer who often shares her Midjourney creations on Twitter. ‘Suddenly you see everything in 4k, it feels weirdly overwhelming but also amazing.’”
AI-Generated Images From Text Can’t Be Copyrighted, US Government Rules
Kris Holt | Engadget
“That’s according to the US Copyright Office (USCO), which has equated such prompts to a buyer giving directions to a commissioned artist. ‘They identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output,’ the USCO wrote in new guidance it published to the Federal Register. ‘When an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the “traditional elements of authorship” are determined and executed by the technology—not the human user,’ the office stated.”
GPT-4 Has the Memory of a Goldfish
Jacob Stern | The Atlantic
“By this point, the many defects of AI-based language models have been analyzed to death—their incorrigible dishonesty, their capacity for bias and bigotry, their lack of common sense. …But large language models have another shortcoming that has so far gotten relatively little attention: their shoddy recall. These multibillion-dollar programs, which require several city blocks’ worth of energy to run, may now be able to code websites, plan vacations, and draft company-wide emails in the style of William Faulkner. But they have the memory of a goldfish.”
Microsoft Lays Off an Ethical AI Team as It Doubles Down on OpenAI
Rebecca Bellan | TechCrunch
“The move calls into question Microsoft’s commitment to ensuring its product design and AI principles are closely intertwined at a time when the company is making its controversial AI tools available to the mainstream. Microsoft still maintains its Office of Responsible AI (ORA), which sets rules for responsible AI through governance and public policy work. But employees told Platformer that the ethics and society team was responsible for ensuring Microsoft’s responsible AI principles are actually reflected in the design of products that ship.”
It’s Official: No More Crispr Babies—for Now
Grace Browne | Wired
“After several days of experts chewing on the scientific, ethical, and governance issues associated with human genome editing, the [Third International Summit on Human Genome Editing’s] organizing committee put out its closing statement. Heritable human genome editing—editing embryos that are then implanted to establish a pregnancy, which can pass on their edited DNA—’remains unacceptable at this time,’ the committee concluded. ‘Public discussions and policy debates continue and are important for resolving whether this technology should be used.’”
Image Credit: Kenan Alboshi / Unsplash