- Sam Altman says that humanity is “close to building digital superintelligence”
- Robots that can build other robots “aren’t that far off”
- He sees “whole classes of jobs going away”, but “capabilities will go up equally quickly, and we’ll all get better stuff”
In a long blog post, OpenAI CEO Sam Altman lays out his vision of the future, explaining why he sees artificial general intelligence (AGI) as now inevitable and about to change the world.
In what could be seen as an attempt to explain why we have not yet achieved AGI, Altman frames AI progress as a smooth curve rather than a rapid acceleration, but says that we have now passed the event horizon, and that “when we look back in a few decades, the gradual changes will have amounted to something big.”
“From a relativistic perspective, the singularity happens bit by bit, and the merge happens slowly,” writes Altman. “We are climbing the long arc of exponential technological progress; it always looks vertical looking forward and flat going backwards, but it’s one smooth curve.”
But even on a more gradual timeline, Altman is confident that we are on the path to AGI, and predicts three ways in which it will shape the future:
1. Robotics
Of particular interest to Altman is the role that robotics will play in the future:
“2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same. 2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world.”
To do real tasks in the world as Altman imagines, robots would have to be humanoid, since our world is, after all, designed to be used by humans.
Altman says: “… robots that can build other robots … aren’t that far off. If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain (digging and refining minerals, driving trucks, running factories, etc.) to build more robots, which can build more chip fabrication facilities, data centers, etc., then the rate of progress will obviously be quite different.”
2. Job losses but also opportunities
Altman says that society will have to change to adapt to AI, partly through job losses, but also through greater opportunities:
“The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything. There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before.”
Altman seems to balance the changing jobs landscape against the new opportunities that superintelligence will bring: “… maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year.”
3. AGI will be cheap and widely available
In Altman’s bold new future, superintelligence will be cheap and widely available. In describing the best way forward, Altman suggests that we first solve the “alignment problem”, which means getting “AI systems to learn and act towards what we collectively really want over the long-term.”
“Then [we need to] focus on making superintelligence cheap, widely available, and not too concentrated with any person, company, or country… Giving users a lot of freedom, within broad bounds society has to decide on, seems very important. The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better.”
Not necessarily so
Reading Altman’s blog, there is a sense of inevitability behind his prediction that humanity is marching unstoppably towards AGI. It’s as if he has seen the future, and there is no room for doubt in his vision. But is he right?
Altman’s vision contrasts with a recent paper from Apple suggesting that we are much further from achieving AGI than many AI advocates would like.
“The Illusion of Thinking”, a new research paper from Apple, states that “despite their sophisticated self-reflection mechanisms learned through reinforcement learning, these models fail to develop generalizable problem-solving capabilities for planning tasks, with performance collapsing to zero beyond a certain complexity threshold.”
The research was conducted on large reasoning models (LRMs) such as OpenAI’s o1/o3 models and Claude 3.7 Sonnet Thinking.
“Particularly concerning is the counterintuitive reduction in reasoning effort as problems approach critical complexity, suggesting an inherent compute scaling limit in LRMs,” says the paper.
In contrast, Altman is convinced that “intelligence too cheap to meter is well within grasp. This may sound crazy to say, but if we told you back in 2020 we were going to be where we are today, it probably sounded more crazy than our current predictions about 2030.”
As with all predictions about the future, we will find out soon enough whether Altman is right.