Sam Altman publishes a new techno-bliss text

In just eight years, artificial intelligence (AI) could lead to what is called “superintelligence,” according to Sam Altman, CEO of OpenAI.

“It is possible that we will have superintelligence in a few thousand days. It may take longer, but I’m confident we’ll get there,” Mr. Altman wrote in an essay titled “The Intelligence Age,” published on a website bearing his name. The essay appears to be the only content on the site so far.

Mr. Altman tends to equate “superintelligence” with what academia and industry broadly call “artificial general intelligence” (AGI) – that is, a computer capable of reasoning as well as, or better than, a human being.

“A Spectacular Leap Forward in Human Prosperity”

In this 1,100-word essay, Mr. Altman lays out the case for bringing AI to as many people as possible. It must, he argues, make possible “a spectacular leap forward in human prosperity.”

“In the future, everyone’s life can be better than it is today.” Prosperity by itself doesn’t necessarily make people happy – there are plenty of rich people who are unhappy – but it would meaningfully improve the lives of people around the world.

Mr. Altman’s essay doesn’t go into much technical detail and makes only a few broad claims about AI:

  • AI is the result of “thousands of years of scientific discovery and technological advancement,” culminating in the invention and continuous improvement of microchips.
  • The “deep learning” forms of AI that made generative AI possible work very well, despite the comments of skeptics.
  • More and more computing power is improving deep learning algorithms that continue to solve problems, so “AI will get better.”
  • It is essential to keep scaling up computing infrastructure in order to bring AI to as many people as possible.
  • AI will not destroy jobs, but it will enable new types of work and lead to unprecedented scientific advances, as well as personal assistants such as personalized tutors for students.

AI bias

The very idea of superintelligence is disputed by many AI experts, such as Gary Marcus, who argue that artificial general intelligence is a long way off, if it is achievable at all.

Altman’s idea that scaling up AI is the primary way to improve it is controversial. Yoav Shoham, a prominent AI scientist and entrepreneur, told ZDNET last month that increasing computing power alone will not advance AI. Mr. Shoham instead advocates research beyond deep learning.

Altman’s optimistic essay also makes no mention of the many issues of AI bias raised by technology specialists, nor of the fast-growing energy consumption of AI data centers, which many believe poses a serious risk to the environment.

“There’s no chance we’ll develop renewable energy fast enough to meet AI demand”

Environmentalist Bill McKibben, for example, wrote that “there’s no chance we’ll develop renewable energy fast enough to meet this kind of additional demand” from AI, and that “in a rational world facing an emergency, we’d delay expanding AI for now.”

Altman’s essay is timely, as it follows the recent publication of two important critiques of AI: Taming Silicon Valley by Gary Marcus, published this month by MIT Press, and AI Snake Oil by Arvind Narayanan and Sayash Kapoor, computer science researchers at Princeton, published this month by Princeton University Press.

In Taming Silicon Valley, Gary Marcus warns of the significant risks posed by generative AI systems operating beyond society’s control:

In a worst-case scenario, unreliable and dangerous AI could lead to mass disasters, ranging from chaos in power grids to accidental war or rampaging robot fleets. Many people could lose their jobs. Generative AI business models ignore copyright, democracy, consumer safety, and climate change impacts. And because it has spread so quickly, with so little oversight, generative AI has effectively become a huge, uncontrolled experiment on our entire population.

Marketing strategy

Mr. Marcus has repeatedly criticized Mr. Altman for using media hype to champion OpenAI’s priorities, including promoting the imminent arrival of AGI. It was a masterstroke to say that OpenAI’s board would meet to determine when artificial general intelligence was “achieved,” Marcus wrote.

And almost no one asked Altman why the important scientific question of when AGI will be reached “will be ‘solved’ by a board and not by the scientific community.”

In their book AI Snake Oil, a scathing indictment of AI hype, Narayanan and Kapoor specifically address Altman’s remarks about AI regulation, accusing him of engaging in a form of manipulation known as “regulatory capture” to avoid any real limitation of his company’s power:

Instead of establishing meaningful rules for the industry, the company (OpenAI) seeks to shift the burden onto its competitors while avoiding changing its own structure. Tobacco companies tried something similar when they lobbied to stifle government action against cigarettes in the 1950s and 1960s.

It remains to be seen whether Altman will expand on his remarks via his website or whether the essay will remain a one-off.
