Google DeepMind CEO Demis Hassabis “AGI to appear in 5-10 years”
– Many research tasks remain, but he claims the emergence of AGI is near
– Predicts ‘ASI’, surpassing human intelligence, will follow soon after AGI
Demis Hassabis, CEO of Google DeepMind, has projected that artificial general intelligence (AGI), AI matching or exceeding human intelligence, could emerge in as little as five years. In his view, AI capable of competing directly with humans remains some way off, but will soon become a reality. Hassabis, who led the development of ‘AlphaFold’, an AI that predicts protein structures, was awarded the Nobel Prize in Chemistry last year.
On the 17th (local time), at a briefing held at DeepMind’s headquarters in London, Hassabis stated that AGI, as intelligent as or more intelligent than humans, would emerge within the next 5 to 10 years. “We will start transitioning to AGI within 5 to 10 years,” he said, noting that a great deal of research remains to be done before reaching that stage.
Hassabis also touched upon artificial superintelligence (ASI), which he said will appear after AGI. “ASI will surpass human intelligence, but the timing of such a breakthrough is unknown,” he remarked.
This outlook is considerably more conservative than other current predictions about the advent of AGI. Dario Amodei, CEO of AI startup Anthropic, an OpenAI rival, predicted at the Davos Forum in January that a form of AGI may arise within 2 to 3 years. Similarly, Tesla CEO Elon Musk speculated that AGI could appear as early as next year, and OpenAI’s Sam Altman said that AGI would emerge “in the near future.” Cisco’s Chief Product Officer Jeetu Patel stated that we would witness significant evidence of AGI at work this year.
Hassabis explained that the greatest challenge in bringing AGI into being is reaching a level where AI can understand the context of the real world. “The issue is how quickly AGI can plan and reason, and how flexibly it can respond in real life,” he added. While systems that autonomously complete tasks in constrained domains such as the game of Go are already feasible, developing AI models that comprehend the many simultaneous variables of the real world remains difficult.
To reach this level, Google DeepMind is undertaking extensive work, including research into AI agents that learn to play strategy games such as StarCraft. Unlike chatbots that provide simple answers, these AI agents actively interact with humans. “When considering communication between agents, allowing agents to express themselves is also a part of what we’re doing,” Hassabis explained.