
A detour from typical AI discourse

How Google is solving some of the hardest problems in science with Artificial Intelligence

KJ
Author

Published 28th July, 2025


Current discourse on AI is stuck in a never-ending cycle. It revolves around LLMs and whether they're going to take our jobs: constant talk of 'vibe coding' ruining a generation of developers, while others point to AI writing uni essays and occasionally deleting entire databases (yes, this really happened).

These concerns are, I would agree, well-founded, and they certainly deserve our attention. But I also believe this discourse drowns out some of the incredible work deep learning is doing in the natural sciences, where it hands scientists the information they need to combat disease. It's being used to tackle some of the oldest and hardest problems in all of science. AI systems are very good at building applications, but they are also getting very good at understanding the very building blocks of life.

I want to turn away from the debate over what AI is destroying and look at how it is actually helping.

The science stuff

Every single plant, every animal, is composed of billions of little machines made of proteins. Proteins are built from instructions in our DNA, which is written in the four bases you'll certainly have heard about if you did A-Level Biology: A, C, T, and G.

Each protein starts as a simple, long chain of amino acids. This is its primary structure. We've been able to read these sequences for a long time now, so we can figure out the order of the amino acids. But this simple chain is not how proteins actually work. From that chain, they begin to fold, forming local structures held together by hydrogen bonds, such as beta-pleated sheets and alpha helices. This is known as the secondary structure. From there, they fold into their actual 3D shape (the tertiary structure), and sometimes multiple folded proteins come together to form a final complex, known as the quaternary structure.

A protein's function is entirely dependent on its final 3D shape. A protein that fights a specific virus, for example, does so because its shape allows it to latch onto that virus, like a key fitting into a lock. If the shape is wrong, the key won't fit, and the protein doesn't work. Many serious diseases like Alzheimer's and Parkinson's disease happen because proteins fold into the wrong shape.

This brings us to the "protein folding problem," a problem that has had scientists in a chokehold for half a century. Yes, we can read the 1D sequence of amino acids, but predicting the final 3D structure that sequence will fold into is painfully difficult. The reason is that a single chain of amino acids can, in theory, adopt a number of different arrangements so large that it easily exceeds the number of atoms in the known universe. And yet our bodies snap proteins into the correct shape incredibly quickly. Some quick maths: there are 20 standard amino acids. If you have a protein that's, say, 100 amino acids long, the number of possible sequences is 20^100. That's a number with 131 digits, far more than the estimated number of atoms in the universe (around 10^80). And that's just the sequences, by the way, not even the ways each sequence could fold into different shapes.
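The back-of-the-envelope numbers above are easy to verify, since Python integers have arbitrary precision:

```python
# Sanity-checking the combinatorics: 20 standard amino acids,
# a hypothetical protein 100 residues long.
n_sequences = 20 ** 100

# How many digits does that number have?
print(len(str(n_sequences)))   # 131

# Compare with the rough estimate of 10^80 atoms in the universe.
print(n_sequences > 10 ** 80)  # True
```

Even this undercounts the problem, since each of those sequences can fold in many ways; the point is just that brute-force search over shapes is hopeless.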

For years, the only way to find out a protein's structure was through very slow and expensive experimental methods, such as X-ray crystallography. These could take a scientist their entire PhD: literal years of effort and hundreds of thousands of pounds, just to map out one protein structure. And for many proteins, the methods simply didn't work at all.

This set the stage for someone to come along with a new way of thinking.

DeepMind

In 2010, a man by the name of Demis Hassabis founded a company called DeepMind.

Now, Hassabis is a bit of a prodigy.

At age 13, he became a chess master, co-created the popular game Theme Park when he was just 17, and went on to get a PhD in cognitive neuroscience. With his friends Shane Legg and Mustafa Suleyman, he launched DeepMind as an AI research lab with one clear mission: to "solve intelligence," and then use that intelligence to advance science and benefit humanity.

For its first few years, DeepMind operated in stealth mode, hiring the brightest minds in the field. Then, their big breakthrough happened when they trained an AI to play old Atari games. It sounds trivial, but the reason this was a big deal was because they didn't program the rules themselves; they gave the AI the screen pixels and the score, and it figured out the rest on its own, eventually becoming better than any human.

This is what caught Google's eye. In 2014, they bought the relatively unknown start-up for a reported £400 million. As part of the deal, DeepMind remained semi-independent in its London headquarters, staying focused on research. Since then, they have hit major milestones, one of which came when their program AlphaGo beat one of the world's best Go players in 2016, a feat people thought was still years away.

AlphaFold

In 2020, DeepMind made a huge advance on the protein folding problem with AlphaFold. It's not a robot or some sort of physical machine; it's an AI system. They trained it on around 170,000 known protein structures, not to memorise them but to learn the rules of physics, chemistry, and biology that dictate how a chain of amino acids folds, spotting patterns even we humans may not see. It learned the grammar of protein folding.

The big reveal came at a competition called CASP. For the first time, a purely computational method was predicting protein structures with an accuracy incredibly close to that of the long, tedious experimental methods. A task that would take a lab years was now being done in hours, sometimes even minutes. One researcher, Professor Andrei Lupas, noted that the system helped him solve the structure of a protein that had kept his lab stuck for a decade.

DeepMind didn't just publish their results, though. In partnership with the European Bioinformatics Institute, they used AlphaFold to predict the structure of nearly every known protein on the planet and put it all online for free in the AlphaFold Protein Structure Database. Since then, it has grown to over 200 million predicted structures, and Hassabis and his colleague John Jumper went on to share the 2024 Nobel Prize in Chemistry for the work.

Every biologist on the planet is now equipped with a cheat sheet. Now, instead of spending years figuring out the shape of a protein, they can simply look it up.
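To give a flavour of how simple that lookup has become, here is a minimal sketch that builds the public download URL for a predicted structure in the AlphaFold Protein Structure Database, keyed by a protein's UniProt accession. The `model_v4` version suffix is an assumption based on a recent database release and may change as the database is updated.

```python
def alphafold_model_url(uniprot_accession: str, version: int = 4) -> str:
    """Build the AlphaFold DB download URL for a predicted PDB file.

    Assumes the database's current file-naming scheme; the version
    suffix may differ for future releases.
    """
    return (
        "https://alphafold.ebi.ac.uk/files/"
        f"AF-{uniprot_accession}-F1-model_v{version}.pdb"
    )

# Example: human haemoglobin subunit beta (UniProt accession P68871).
url = alphafold_model_url("P68871")
print(url)
# A real script would then download the file, e.g. with
# urllib.request.urlretrieve(url, "P68871.pdb"), and open it in a
# structure viewer.
```

That one URL replaces what used to be a multi-year experimental project.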

AlphaGenome

We all carry thousands of small variations, or mutations, in our DNA, and they are part of what makes each of us unique. For the majority of these mutations, we have no idea what they do. Is a specific mutation harmless, or is it the cause of a patient's disease? For years, we had no efficient way of finding out.

Not only did DeepMind practically solve the protein folding problem; they took on this problem too.

AlphaGenome is an AI that reads a DNA sequence and predicts how a specific change might affect how our genes are regulated, which helps sort the harmless changes from the ones likely to cause problems. It's particularly good at the non-coding regions of DNA, where most disease-associated variants are found. In one test, it identified the mechanism behind a type of leukaemia by predicting how a mutation could wrongly activate a cancer-causing gene. An amazing breakthrough.

I think a lot of the discourse around LLMs is well-founded, but its problem is that it lumps all of AI into one category. This technology is becoming incredibly sophisticated, and it's quietly being used to solve some of the most important problems in science.