OpenAI’s Altman sees ‘superintelligence’ possible in a ‘few thousand days’ – but he’s short on details
- by Anoop Singh
Just eight years from now, artificial intelligence (AI) may lead to something called “superintelligence”, according to OpenAI CEO Sam Altman.
“It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there,” wrote Altman in an essay, titled The Intelligence Age, posted on a website bearing his name. The essay appears to be the only content on the site so far.
Also: OpenAI expands o1 model availability – here’s who gets access and how much
On Monday, Altman posted a link to the essay on X (formerly Twitter); the post had received 12,000 likes and 2,400 reposts by Tuesday afternoon:
The Intelligence Age: https://t.co/vuaBNwp2bD
— Sam Altman (@sama) September 23, 2024
Altman has used the term superintelligence in interviews, such as one with the Financial Times a year ago. He has tended to equate superintelligence with the broad quest, in academia and industry, to achieve “artificial general intelligence” (AGI): a computer that can reason as well as or better than a human.
In the 1,100-word essay, Altman makes a case for spreading AI to as many people as possible, describing it as an advance in the “infrastructure of society” that will enable a dramatic leap in human prosperity.
Also: What is artificial general intelligence?
“With these new abilities, we can have shared prosperity to a degree that seems unimaginable today,” wrote Altman.
“In the future, everyone’s lives can be better than anyone’s life is now. Prosperity alone doesn’t necessarily make people happy — there are plenty of miserable rich people — but it would meaningfully improve the lives of people around the world.”
Altman’s essay is short on technical details and makes a handful of sweeping claims about AI:
- AI is the result of “thousands of years of compounding scientific discovery and technological progress”, culminating in the invention and continued refinement of computer chips.
- The “deep learning” forms of AI that made generative AI possible have worked very well, despite the doubts of skeptics.
- More and more computing power is advancing the algorithms of deep learning that keep solving problems, so “AI is going to get better with scale”.
- It’s crucial to keep expanding that computing infrastructure in order to spread AI to as many people as possible.
- AI will not destroy jobs but will enable new kinds of work, lead to scientific advances never before possible, and provide personal helpmates, such as personalized tutors for students.
Altman’s essay runs counter to many popular concerns about AI’s ethical, social, and economic impact that have gathered steam in recent years.
Also: Trying to break OpenAI’s new o1 models? You might get banned
The notion that scaling up computing will lead to a kind of superintelligence or AGI runs counter to the conclusions of many AI scholars, such as critic Gary Marcus, who argues that AGI, or anything like it, is nowhere near on the horizon, if it is achievable at all.
Altman’s notion that scaling AI is the main path to better AI is controversial. Prominent AI scholar and entrepreneur Yoav Shoham told ZDNET last month that scaling up computing will not be enough to boost AI. Instead, Shoham advocated scientific exploration outside of deep learning.
Altman’s optimistic view also makes no mention of the numerous issues of AI bias raised by scholars of the technology, nor of the rapidly expanding energy consumption of AI data centers, which many believe poses a serious environmental risk.
Environmentalist Bill McKibben, for example, has written that “there’s no way we can build out renewable energy fast enough to meet this kind of extra demand” by AI, and that “in a rational world, faced with an emergency, we would put off scaling AI for now.”
Also: AI scientist: ‘We need to think outside the large language model box’
The timing of Altman’s essay is noteworthy, as it comes on the heels of some prominent critiques of AI. These include Marcus’s Taming Silicon Valley (MIT Press) and AI Snake Oil, by Princeton computer science scholars Arvind Narayanan and Sayash Kapoor (Princeton University Press), both published this month.
In Taming Silicon Valley, Marcus warns of epic risks from generative AI systems unfettered by any societal control:
In the worst case, unreliable and unsafe AI could lead to mass catastrophes, ranging from chaos in electrical grids to accidental war or fleets of robots run amok. Many could lose jobs. Generative AI’s business models ignore copyright law, democracy, consumer safety, and impact on climate change. And because it has spread so fast, with so little oversight, Generative AI has in effect become a vast, uncontrolled experiment on our whole population.
Marcus repeatedly calls out Altman for using hype to advance OpenAI’s priorities, especially in promoting the imminent arrival of AGI. “One master stroke was to say that the OpenAI board would get together to determine when Artificial General Intelligence ‘had been achieved’,” writes Marcus of Altman’s public remarks.
“And few if any asked Altman why the important scientific question of when AGI was reached would be ‘decided’ by a board of directors rather than the scientific community.”
Also: How well can OpenAI’s o1-preview code? It aced my 4 tests – and showed its work
In their book AI Snake Oil, a scathing denunciation of AI hype, Narayanan and Kapoor specifically call out Altman’s public remarks about AI regulation, accusing him of engaging in a form of manipulation known as “regulatory capture” to avoid any actual constraints on his company’s power:
Rather than meaningfully setting rules for the industry, the company [OpenAI] was looking to push the burden on competitors while avoiding any changes to its own structure. Tobacco companies tried something similar when they lobbied to stifle government action against cigarettes in the 1950s and ’60s.
It remains to be seen whether Altman will broaden his public remarks via his website or whether the essay is a one-shot affair, perhaps meant to counter other skeptical narratives.