What is AGI? The artificial intelligence milestone and its role in OpenAI tumult
The possibility of advanced artificial intelligence and the idea that it could disrupt the economy or even threaten humanity may have played a role in the chaos in recent days at leading AI firm OpenAI.
In the absence of a clear statement from OpenAI about why its board unexpectedly fired Sam Altman as CEO on Friday, some industry insiders theorize that the split may have occurred over the company's approach to "artificial general intelligence," or AGI.
Here's what AGI is and why it matters to the legacy of OpenAI.
What is AGI?
AGI is a term for AI models that are as intelligent as, or more intelligent than, the average human.
OpenAI defines AGI as "AI systems that are generally smarter than humans."
Others have provided slightly different definitions for the term, and there is no consensus as to when a model has attained AGI.
Google's DeepMind, for example, recently published a taxonomy of AGI with five levels of capability, ranging from "emerging AGI," a category that includes OpenAI's ChatGPT, to a theoretical artificial superintelligence that outperforms 100% of humans.
Future of Life Institute founder Max Tegmark has said that the creation of AGIs is "playing God" and would "make humans completely obsolete."
Tegmark and other AI researchers argue that AGI could be developed within the next few years. That is why, he said, Congress must set guardrails for regulating AGI soon, before the technology moves beyond lawmakers' grasp.
OpenAI and AGI
OpenAI was founded in 2015 to create an AGI that would benefit "humanity as a whole." Such programs, according to OpenAI, would be advanced enough to outperform any person at "most economically valuable work." While programs such as ChatGPT and Bing can process language and perform certain calculations, they are generally not considered AGIs.
Several reports over the weekend quoted unnamed sources at the company as saying that OpenAI's board removed Altman from his role because he was hurrying the company's products through development without giving the company's safety team enough time to create guardrails. This, alongside some passing remarks from Altman at public events, has led some to theorize that OpenAI may have made an AGI but did not want to reveal it to the public.
"My prediction is that a few weeks ago the team at OpenAI demoed either a machine that showed consciousness or AGI and Sam didn't immediately tell the board and their feelings were hurt." — Dan Siroker, Nov. 18, 2023
Altman does not appear convinced that AGI will arrive any time soon. "We need another breakthrough," Altman told students at Cambridge on Nov. 1 when asked how to make an AGI. One student suggested that researchers continue to update large language models, the technology behind most chatbots, to improve their ability to process complex information. Altman argued that this would not be enough to create an AGI and that an AI model should not be considered an AGI unless it can perform tasks such as discovering new parts of physics.
Ilya Sutskever, OpenAI's chief scientist, has argued that the bar for AGI is a model that can do anything humans can do. Sutskever was reportedly more convinced than Altman that AGI would arrive soon and spent more of his time at OpenAI working to prevent its possible dangers. As Sutskever grew more confident in the power of OpenAI's models, he aligned himself with a camp of employees who saw a greater existential risk in OpenAI's efforts to build an AGI. Altman, in contrast, focused more on turning OpenAI's research into a commercially viable product. This tension could have played a role in the decision of the board, of which Sutskever is a member, to fire Altman.