- The trial between Elon Musk and Sam Altman sparked a debate about whether AI could cause human extinction
- The judge acknowledged that AI could pose real-world risks
- The case could reshape OpenAI and the future of ChatGPT
“This is a real risk, we could all die as a result of artificial intelligence.”
That stark warning cut through a tense courtroom this week as Elon Musk’s legal battle with Sam Altman took an unexpected turn, briefly pivoting from a corporate dispute to a debate over whether AI could wipe out humanity.
Judge Yvonne González Rogers quickly shut it down, reminding Musk’s attorney, Steven Molo, to stay focused on the issue at trial, delivering a withering rebuttal:
“It’s ironic that your client, despite these risks, is building a company that is in the exact space,” Rogers said. “There are some people who do not want to put the future of humanity in the hands of Mr. Musk. But we are not going to get into that business.”
The fight between Musk and Altman
The lawsuit between Musk and OpenAI is the latest chapter in a feud between the rival CEOs that has been brewing for years. Much of it has played out through public comments and online attacks, but it has now become a month-long federal court case in California.
At the center of Musk’s claim is the accusation that OpenAI, the company he co-founded in 2015, strayed from its original nonprofit mission. He maintains that Altman betrayed the public’s trust by turning the organization into a for-profit company.
Musk also named OpenAI president Greg Brockman and Microsoft as part of the case, alleging they played a role in the company’s shift toward commercialization, allegations Microsoft denies.
The judge is right, of course. This case is not about whether AI should exist. This is about the future direction of OpenAI. A Musk victory could trigger a major restructuring at the company and potentially even lead to Altman’s ouster as CEO.
But the fact that extinction came up at all points to the real story here: whether AI could pose an existential threat to humanity.
An old debate
The technology being discussed in abstract terms is already here, integrated into tools like ChatGPT and rapidly spreading into everyday life. The people at the center of the case are the same figures shaping the future of AI, and moments like this week’s court exchange point to unresolved issues beyond a corporate battle.
Even as AI becomes increasingly integrated into everyday products, there is still no consensus among its creators about how risky it really is. Some present it as a transformative tool that will improve productivity, creativity and access to information. Others continue to warn, sometimes in uncompromising terms, of long-term dangers that are harder to define, let alone regulate.
The same companies racing to deploy smarter, faster AI tools are also sometimes the ones expressing concern about where that race might lead. That tension is not new, but it is rarely expressed so directly and almost never in a legal environment like this.
The trial is expected to last several weeks, with billions of dollars and the future structure of OpenAI at stake. But it also captures the central contradiction of the AI era right now: The people building the technology are still debating how dangerous it could be, even as they continue to build it at high speed.