Google’s AI Future: Hassabis’ Herculean Task


The Gist

  • AI research leader. Demis Hassabis heads Google’s unified AI research, aiming to maintain Google’s cutting-edge status in AI.
  • Breakthroughs to products. Hassabis’ challenge is to translate DeepMind’s AI breakthroughs into tangible products for Google.
  • Navigating Google’s bureaucracy. Success depends on Hassabis’ ability to push AI advancements through Google’s conservative product organization.

Demis Hassabis stares intently through the screen when I ask him whether he can save Google. It’s early evening in his native U.K. and the DeepMind founder is working overtime. His Google-owned AI research house now leads the company’s entire AI research effort, after ingesting Google Brain last summer, and the task ahead is immense.

Google Thrives, AI Questions Linger

Google’s core business is thriving, but that almost seems beside the point. Hassabis and I are speaking on Google Meet, in an interview arranged via Gmail, scheduled on Google Calendar and researched via Google Search. Largely thanks to these core products, Google posted $307 billion in revenue last year, growing 13% in the fourth quarter, and is trading near its all-time high. But questions about its ability to win the AI race, or even competently run it, have clouded its recent success. 

“I don’t really see it like that,” Hassabis said, challenging the premise of my question. Artificial intelligence, he said, will “disrupt many, many things. And of course, you want to be on the cutting edge of that, influencing that disruption, rather than on the receiving end.”

Related Article: What You Need to Know About Google Bard

Leading Google’s AI Breakthroughs

Hassabis is the person who’s supposed to be keeping Google on that cutting edge. The award-winning researcher and neuroscientist, who was just knighted on Thursday, has led a dynamic AI team within Google responsible for numerous breakthroughs. Since its 2014 acquisition, DeepMind has cracked a seemingly impossible board game with AlphaGo, decoded protein structures with AlphaFold, and laid the groundwork for synthesizing thousands of new materials, all via revolutionary AI models.

Related Article: Why Google Renamed Bard to Gemini

Balancing AI With Google’s Core Business

But Hassabis and the combined Google DeepMind team must now translate those types of breakthroughs into tangible product improvements for a $1.8 trillion company seeking a way forward in an increasingly AI world. And he must do it all without killing a search advertising business that serves up the lucrative blue links AI threatens. 

Related Article: Inside the Crisis at Google

Steering Google’s AI Course

With Google late on chatbots, rife with naming confusion and fresh off an embarrassing image generation fiasco, the path forward won’t be simple. But Hassabis has a chance to fix it. To those who know him, have worked alongside him, and still do — all of whom I’ve spoken with for this story — Hassabis just might be the perfect person for the job.

“We’re very good at inventing new breakthroughs,” Hassabis tells me. “I think we’ll be the ones at the forefront of doing that again in the future.”

Related Article: Conversational AI Brings Google Gemini to Google Ads

From Brains to Computers

Born in July 1976 to a Chinese-Singaporean mother and Greek Cypriot father, Hassabis began thinking about AI as a boy in North London. A young chess master with professional aspirations, Hassabis noticed at 11 years old that the electronic chess board he’d been training against had some form of intelligence inside and grew interested in the tech. “I was fascinated by how this lump of plastic was programmed to be able to play chess,” he said. “I started reading some books about it and programming my own little AI games.” 

Related Article: Google’s Gemini Marketing Trick

Games, Neuroscience & AI

After co-creating the hit game Theme Park at age 17, Hassabis went on to study computer science at Cambridge before returning to game development in his 20s. By then, rudimentary AI systems were growing ubiquitous in gaming and Hassabis decided he’d need to understand how the human brain works if he were to make a difference in the field. So he enrolled in a graduate neuroscience program at University College London and then did postdocs at MIT and Harvard.

Related Article: Google Revises Image Recognition for Gemini as It Sets a Relaunch

Broadly Smart, Effortlessly Convincing

“He was very smart, and in a different way than some of the other smart people I know,” said Tomaso Poggio, an MIT professor, computational neuroscience pioneer and postdoc adviser to Hassabis. “It’s not that he is technically a magician, in any one area — well, maybe chess — but he’s broadly smart about everything you can speak of. And it’s very convincing, without any effort.” 

Blending It All Into New AI Advancements

One night, Poggio hosted Hassabis for dinner, and his student had an idea brewing for a new company that would apply lessons from neuroscience to advance the state of AI. Artificial brains, he believed, could work similarly to human ones. And games could simulate real-world environments, making them an ideal training ground.

After the dinner, Poggio asked his wife if they should invest in Hassabis’ new company and, having just met him, she told him to get in. Poggio became one of DeepMind’s earliest investors, though he wishes he’d given more cash to Hassabis. “It was a good thing to do. Unfortunately, it was not enough money,” he said.

Spearheading AI’s Reinforcement Learning Breakthrough

In DeepMind’s early days, Hassabis executed the vision by running AI agents through game simulations. In doing so, he helped advance reinforcement learning, a type of AI training in which a bot receives no instructions at all, just countless opportunities to fail, until it eventually learns what it needs to do to win.
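That trial-and-error loop can be sketched with tabular Q-learning, a simple ancestor of the deep reinforcement learning DeepMind actually used. The Python toy below is purely illustrative (the corridor environment, constants and function names are my own invention, not DeepMind code): an agent in a five-cell corridor is told nothing, fails its way around, and learns to walk right from the reward signal alone.

```python
import random

# Toy tabular Q-learning sketch (illustrative only; DeepMind's systems paired
# reinforcement learning with deep neural networks on far richer environments
# such as Atari games). Environment: a 5-cell corridor. The agent starts in
# cell 0 and receives a reward of 1 only upon reaching cell 4.
# Actions: 0 = left, 1 = right.

N_STATES = 5
ACTIONS = (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3  # learning rate, discount, exploration

def step(state, action):
    """Apply an action; the episode ends with reward 1 at the last cell."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=1000, seed=0):
    random.seed(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # value estimate per (state, action)
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[state][a])
            nxt, reward, done = step(state, action)
            # Q-learning update: nudge the estimate toward reward + discounted future value.
            q[state][action] += ALPHA * (reward + GAMMA * max(q[nxt]) - q[state][action])
            state = nxt
    return q

if __name__ == "__main__":
    q = train()
    for s in range(N_STATES):
        print(s, [round(v, 2) for v in q[s]])
```

After training, “right” scores higher than “left” in every non-terminal cell: nobody ever told the agent the goal; repeated failure plus a sparse reward shaped the behavior.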

“They had an agent playing all the Atari Games,” said Tejas Kulkarni, an AI researcher who worked at DeepMind and is now CEO of AI startup Common Sense Machines. “This was the first time that deep reinforcement learning proved itself. It was like, holy shit. This is the place to be. Everyone’s flocked there, including me.”

AlphaGo: DeepMind’s Milestone in AI Evolution

If Atari was an appetizer, AlphaGo was the main course. Go is a board game with more playable combinations than atoms in the universe, an “Everest” of AI, as Hassabis calls it. In March 2016, DeepMind’s AlphaGo — a program that combined reinforcement learning and deep learning (another AI method) — beat Go grandmaster Lee Sedol, four games to one, over seven days. It was a watershed moment for AI, showing that with enough computing power and the right algorithm, an AI could learn, get a feel for its environment, plan, reason and even be creative. To those involved, the win made achieving artificial general intelligence — AI on par with human intelligence — feel tangible for the first time.

“That was pure magic,” said Kulkarni of the Go win. “That was the moment where people were like, okay, AGI is coming now.”

“We’ve always had this 20-year plan from the start of DeepMind,” said Hassabis, when asked about AGI. “I think we’re on track, but I feel like that was a huge milestone that we knew needed to be crossed.”

Enter OpenAI

As DeepMind rejoiced, a serious challenge was brewing under its nose. Elon Musk and Sam Altman founded OpenAI in 2015, and despite plenty of internal drama, the organization began working on text generation.

Ironically, a breakthrough from within Google — called the transformer model — enabled the real leap. OpenAI used transformers to build its GPT models, which eventually powered ChatGPT. These generative “large language” models employed a form of training called “self-supervised learning,” focused on predicting patterns in text rather than understanding an environment the way AlphaGo did. OpenAI’s generative models were clueless about the physical world they inhabited, making them a dubious path toward human-level intelligence, but they would still become extremely powerful.
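The difference in training signal is easy to see in miniature. The sketch below is my own toy, nothing like a real transformer; it captures only the self-supervised objective: the “label” for each position is simply the next character of raw text, so no human annotation, and no environment, is needed.

```python
from collections import Counter, defaultdict

# Toy sketch of the self-supervised objective behind GPT-style models:
# predict the next symbol from the ones before it. Real large language
# models use transformer networks over subword tokens; this count-based
# bigram model only illustrates the training signal.

def train_bigram(text):
    """Count, for each character, which characters tend to follow it."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1  # the "label" is just the next character
    return counts

def predict_next(counts, prev):
    """Return the most frequent continuation observed after `prev`."""
    return counts[prev].most_common(1)[0][0]

if __name__ == "__main__":
    model = train_bigram("the theme of the thesis")
    print(predict_next(model, "t"))  # 'h': every 't' in the text precedes an 'h'
```

Scale the same idea up from character counts to a neural network over trillions of tokens and you have, in caricature, the GPT recipe: pattern prediction, with no notion of a world behind the words.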




