OpenAI CEO Urges US to Prepare for Risks and Benefits of AI ‘Superintelligence’

OpenAI CEO Sam Altman said US policymakers must act now to prepare for advanced artificial intelligence, warning that the technology is moving from theory to everyday economic use.

In an interview with Axios, Altman said AI systems already handle coding and research tasks that once required teams of programmers. Newer models will go further, he said, helping scientists make important discoveries and allowing individuals to do the work of entire groups.

That shift is already visible in cybersecurity, where some industry leaders say artificial intelligence is tipping the scales toward attackers.

Charles Guillemet, chief technology officer at hardware wallet maker Ledger, for example, told CoinDesk that AI tools are reducing the cost and skills needed to find and exploit software flaws. Tasks that previously took months, such as reverse engineering code or linking multiple vulnerabilities, can now be completed in seconds with the right prompts.

Last year, the crypto industry suffered more than $1.4 billion in assets stolen or lost in attacks. That number could continue to grow, Guillemet suggested. Additionally, developers are increasingly relying on AI-generated code, which can potentially introduce new bugs at scale.

The answer, he said, will require stronger defenses, such as formally verified code, hardware devices that keep private keys offline, and a broader recognition that systems can fail.

AI in cybersecurity and biosecurity

While Altman noted that AI could accelerate drug discovery and materials science, he warned that it could also enable more powerful cyberattacks and lower the barrier to harmful biological research. These threats may emerge within a year, he said, making coordination among the government, technology companies and security groups urgent.

“We’re not that far from a world where there are incredibly capable open source models that are very good at biology,” he said. “The need for society to be resilient to terrorist groups that use these models to try to create new pathogens is no longer a theoretical question.”

Another example he suggested was a “world-shaking cyberattack” that could happen as early as this year. Avoiding that, he said, would require an “enormous amount of work.”

He framed OpenAI’s policy ideas as a starting point, aiming to drive debate about how to manage systems that learn fast and perform across many fields. He said it’s important to use AI to help defend against these potential attacks.

On the possible nationalization of OpenAI, Altman said the case against it is based on the need for the United States to achieve “superintelligence” before its rivals.

“The most important argument against nationalization would be that we need the United States to succeed in building a superintelligence in a way that is aligned with America’s democratic values before anyone else does it,” he said. “That probably wouldn’t work as a government project, I think that’s kind of sad.”

Still, Altman said he believes companies involved in AI should work closely with the US government.

Given his role at OpenAI, Altman also has a financial interest in how the sector evolves. That position can shape how he frames both the urgency of regulation and the role of private companies like OpenAI in managing emerging risks.

AI as a utility

Energy is one area where Altman sees rapid progress, since more processing capacity could keep costs down as demand for AI grows.

Altman also pointed to early signs of job changes. A programmer in 2026, he said, already works differently than one from the previous year.

AI, he said, will become a kind of utility, like electricity, embedded in all devices, with the cost of basic intelligence falling while higher-end systems remain expensive.

“You’ll have this super personal assistant running in the cloud,” Altman said. “If you use it a lot or with high levels of intelligence you will have a higher bill per month and if you use it less, you will have a lower bill.”

Altman added that it is “incredibly important that the people who create AI are people of high integrity and trustworthiness.”
