I’ve been writing about AI for over a year and there’s still no such thing as a quiet week. There’s always a lot to catch up on. Sometimes it’s positive, often worrying, and sometimes downright strange. This week is no exception, particularly as broader geopolitical tensions shape and are shaped by AI in increasingly visible ways. We are on the verge of new models, new infrastructures and, inevitably, new concerns.
ICYMI: AI WEEK
This article is part of our ICYMI franchise, where we round up the biggest stories of the week, this time in AI.
This week’s feature story captures something I’ve seen repeatedly over the past year: big, bold claims that meet real-world boundaries. Microsoft now suggests that Copilot should be used “for entertainment purposes only,” which seems like a big change in tone. On top of that, there’s an in-depth and fascinating New York profile of Sam Altman, new insight into OpenAI’s revenue numbers, and new concerns surrounding Anthropic’s latest model, Claude Mythos. Mounting tension, mounting excitement, and always the feeling that there is too much AI news to absorb, which is exactly why this roundup exists. There’s also a quiz at the bottom to test your knowledge, so keep an eye out.
You know Microsoft Copilot, the AI tool marketed as essential for the modern workplace and an iconic example of how AI can transform productivity? According to Microsoft’s official terms and conditions, it’s “for entertainment purposes only.”
OpenAI, Google, and Anthropic have similar disclaimers in their own terms. But what matters is the gap between how these tools are sold and what the fine print says. Microsoft wants companies to continue using Copilot. But the language puts the responsibility back on the user if something goes wrong.
This is a pattern we’ve seen in AI therapy, AI friendship, AI life coaching, and even AI romantic partners. AI tools can perform certain functions very well, but the risk is yours. So the big question is not whether AI will make mistakes; we know it will. It is who will be responsible when it does. And right now, AI companies are doing everything they can to make sure it’s not them.
OpenAI, the maker of ChatGPT, says it’s making a lot of money. Does that mean the AI bubble won’t burst?

One of the biggest questions hanging over the entire AI industry right now is whether it is actually making money. The answer is yes, but maybe not in the way you might think. The revenue comes less from individuals using ChatGPT for recipes or late-night health spirals and more from companies paying to integrate AI into their products and workflows.
But this matters even if you never use AI this way at work, because revenue changes the trajectory significantly. If companies can make a lot of money from AI, it becomes harder to argue that this is a passing hype cycle or a bubble about to burst at any moment. It also signals where things are headed: a greater focus on enterprise customers, which could mean higher costs or stricter limits for regular users.
Iran threatens to bomb $30 billion Stargate AI data center backed by OpenAI, Nvidia and other tech giants

Reports suggest that Iranian officials have referenced technological infrastructure as a potential target in the event of an escalation with the United States and its allies. The most attention-grabbing project is Stargate, which is a large data center initiative in the United Arab Emirates backed by major tech players including OpenAI. It is designed to provide large amounts of the computing power needed to train and run advanced AI systems.
This is important because it shows how dependent AI is on massive infrastructure, which requires enormous amounts of energy and stable geopolitical conditions to function. For everyday users, it is a reminder that every tool we rely on sits on top of that infrastructure. If it becomes too expensive, politically contested, or environmentally damaging, that could mean much higher costs, less access, and slower progress.
More AI news you may have missed
Were you paying attention? Take our AI news quiz