When John Jumper got the call earlier this month from the Royal Swedish Academy of Sciences that he was winning the Nobel Prize, he almost didn’t answer.

The director of Google DeepMind, the tech giant’s formidable AI lab, recognized the Swedish area code on his phone and froze, unable to believe what was on the precipice of happening. But on October 9, he was awarded the Nobel Prize in Chemistry, alongside DeepMind cofounder Demis Hassabis, for their creation of AlphaFold, an AI model that predicts the structure of proteins, the building blocks of biology, based on their chemical sequence.

“I really didn’t think it was going to happen,” Jumper told Forbes.

The two Google scientists split the award with David Baker, a professor at the University of Washington who used software to invent a new protein. Baker, who founded the university’s Institute for Protein Design in 2012, told Forbes earlier this year that he was shocked to see how much the field has grown in recent years. “It was always this kind of lunatic, fringe thing. Very much out of the mainstream,” he said.

For more than half a century, protein folding had been one of the most vexing and promising problems in modern science: predicting the shape and structure of proteins is key to understanding how they’ll interact with their environment, paving the way for drug discoveries and the development of new materials. With AlphaFold, DeepMind was able to predict millions of protein structures using AI — far faster and cheaper than performing those computations with traditional methods.

Jumper, who joined DeepMind in 2017, is the youngest Nobel laureate in Chemistry in over 70 years. He chatted with Forbes about winning the award, the AI landscape, and building AlphaFold.

Richard Nieva: First, congrats on the accomplishment. I heard in an interview you recorded right after the announcement that you thought you had a 10% chance of winning. And so, how did you find out you won, and what was going through your head?

John Jumper: So it’s like a 10% chance of winning the lottery, right? The best chance anyone ever gets at the lottery. But still, I was very nervous, and still very much couldn’t sleep the night before. I had tried to sleep through it. My goal was to wake up and find out, one way or another, whether I had won.

But anyway, I was sitting, waiting. I knew the call usually comes about an hour beforehand. And so with about an hour to go before the announcement was supposed to be, I just turned to my wife, and I said, “Well, I guess not this year.” And 30 seconds later, I get this Swedish area code popping up on my phone. And I’m not sure I remember it this way, but according to my wife, I just stared at it for a while, until she yelled at me to answer the phone. I really didn’t think it was going to happen.

Nieva: How did you celebrate?

Jumper: I knew that the AlphaFold team was going to get together to watch the announcement live, and they didn’t know yet. So I was basically rushing down to log on. They were all in the office, but I wasn’t yet. So I log on and watch them find out, which was really fun. And then after that, I went into the office, and I think someone had already gone to the place down the street and bought every bottle of sparkling wine they could get. And so we had this impromptu office party. I got in around noon, and it was just, you know, hugging. Just unbelievable that it happened, and great to get to share it with people. And the vast majority of the team is still there.

Nieva: You’ve been at DeepMind since 2017. Obviously we’re now in this period where AI is everywhere, and it’s kind of a part of everything. What’s different about this current moment for AI?

Jumper: I think there are at least two moments happening at once. And in some ways, it’s very interesting for me personally to see the incredible results on chatbots and image generation really waking the world up to how very powerful these technologies are. But it mirrored, in a lot of ways, what had happened in the scientific community with AlphaFold. And I think what we’re really seeing is that these models are now very effective at hard problems that we don’t know how to solve otherwise. And there’s this moment in chatbots where they can do incredible things — they can write poetry, they can summarize emails — they can do a lot of things that humans could do, but we had no idea how to program computers to do.

And then in science, I think there’s a different moment. It’s really one where there are problems that we couldn’t solve, that no human could do. It’s not really about learning from humans. It’s about learning from experimental data and predicting, for AlphaFold, a year’s worth of experiments in five minutes. And these technologies are becoming powerful at the same time, but it’s almost a separate moment.

Nieva: That’s interesting. You work at a place that develops both of these technologies. How do they merge in your world? Or is that even a goal?

Jumper: I think there are two answers to it. One answer is that, on a technological basis, they exchange quite a bit. The science of learning from data is improving incredibly. Lessons on chatbots get pulled over into scientific work. Lessons on scientific work get pulled over into chatbots. And all of this is cross-pollinating at the level of techniques and computer hardware.

There’s an interesting question of how much they merge, or at least, how much will chatbots influence the scientific side? Of course, we’ve shown some results on, say, using chatbots to search, summarize or extract facts from scientific papers. But we don’t yet know if, for example — and I think Demis has talked about this a bit — whether that will get predictive. When will the first predictive experimental results come from English language technologies or others? We don’t know. And we don’t know if that’s very far or near term. So I think that whether they merge in that way is an interesting question.

Nieva: You released AlphaFold 3 in March, where you integrated a diffusion model into the technology. How are you thinking of integrating other types of foundation models as you advance AlphaFold further?

Jumper: In general, the lessons that have been learned in terms of how you train these models, how you control them and how you scale them, will matter in science. But science does have a distinction. Our data is incredibly finite. There isn’t web-scale data talking about what proteins look like. And even if there were, all of that knowledge would really be derived from the roughly 200,000 structures that we have from the scientific community. So these really are strongly finite-data problems. I think you will probably see, as people think more and more about transfer learning, and possibly reasoning techniques and others, maybe this will really start to transfer.

Nieva: When we talk about AI, we often talk about safety and guardrails. What are some of the biggest worries that you have with how people might use AlphaFold?

Jumper: So one thing to say is that we spend a lot of time assessing this before release — it’s something we want to assess before we put out results. For example, for AlphaFold 2, we talked to something like 30 biosecurity experts and said, “What are the ways in which this could contribute to harm from bad actors? In what ways would it do so?” And the overwhelming consensus was that there weren’t major risks here. This was low risk, and it was highly beneficial to release. So that is something we are constantly talking about.

There’s a wider discussion of what things may lead to risk. And of course, we think a lot about how bad actors might combine this with other biotechnology to cause harm. I think a lot of the risks also center around viruses. And you can get some information from AlphaFold, but a lot of it also depends on more complex properties like virulence and transmission. AlphaFold really operates at the low-level, atomic details of all these things, and that’s quite distinct. But we are thinking about it actively. And before every release, that’s really part of what we’ve done for years and years now to assess these things.

Nieva: DeepMind merged with Google Brain last year. Has that changed any of the way that you do research or develop products?

Jumper: I can only really speak locally to the science team. Of course, there’s lots of things in terms of Gemini and how that works together. But in terms of science, I would say not much has changed. Or at least it’s improved. We’ve integrated some really great teams from Google Brain into the science unit, like [Google research scientist] Lucy Colwell’s team that’s worked on protein function and other things. Generally, I think it worked quite well. But for the science unit, I think we have the same mission. We have maybe expanded abilities. We can talk more about how we deploy it. We have more capabilities in that way.

Nieva: One interesting thing about this moment is that the big AI frontier labs are getting tons of attention. There’s Google DeepMind, OpenAI, Anthropic, and others. As you think about rival labs, is there more pressure?

Jumper: Within science, at least, I don’t feel it that way. I think a lot more about all the other experimental techniques that compete with us. All the other great startups and other people working in computational biology. And really, for me, it’s also, this is such an interdisciplinary field, and I think we have a strong advantage in that it’s very hard. You can’t just parachute into science. Even within GDM [Google DeepMind], I remember early on, people would say, “Oh, protein structure prediction. That’s a sequence-to-sequence problem. Stand back, I know how to do sequence-to-sequence problems.” And then they try their idea, and it would inevitably not work. And you realize that you really have to work at the intersection of the scientific and the machine learning discipline. So I think that’s one of the great strengths of GDM, and will continue to be. I mean, there are excellent other labs. There are a lot of really great AI labs doing a lot of other things, but we’ve done some really great work within scientific ML [machine learning].

Nieva: You mentioned these two AI moments going on at the same time right now — chatbots and science. Gemini and ChatGPT get lots of attention because they are consumer products and everyday people can use them. But as someone working on science, have you found that there is any chatbot fatigue?

Jumper: I don’t think so, within science. I mean, one thing to say is that the vast majority of GDM doesn’t work on science. The vast majority of it works on other things, and quite a lot on chatbots. And there’s also LLM-related projects within science. I see it as enormous opportunity. I think it’s great. Also, I would be probably more worried about doing this without access to top-tier chatbots and LLM experts. Because that’s a reasonably specialized thing. But I don’t see it so much as worry. It’s an opportunity. The space is receiving enormous attention. There’s enormous investment in compute, which I’m very happy about — all these things, tooling and everything else. That means we get to spend more of our time worrying about the scientific aspects of it.

Nieva: Going forward, what are the big scientific problems you’re most excited about tackling?

Jumper: Really two things. I think one is simply that protein structure prediction will take us into drug development, be it small molecule or via protein design. This is just really, really exciting — that we will get qualitatively better at these problems in the next few years. The other one is really how AlphaFold can be used to inform how we understand more and more of the cell. And we’re seeing things like figuring out how proteins come together. We’re seeing larger and larger systems studied with AlphaFold and really creative uses. We’ll keep doing this, and it’ll get us out to really understanding the cell in a way that changes our science. That is really exciting to me.

The conversation has been edited and condensed for clarity.
