- At Google I/O, Google plans to announce that it will make AI the main way people interact with their phones.
- Android 17 and Gemini will handle everyday tasks automatically.
- The apps will still exist, but mostly in the background.
Google’s I/O conference is coming soon, and the tech giant plans to integrate its AI models so deeply that they could practically replace standard applications. The company will show updates spanning Android 17, Chrome, and Gemini, all aimed at replacing taps through an app’s menus with a single, direct request that AI can interpret and carry out on its own.
For the average person, that means the phone in your hand is about to feel a little less like a collection of apps and a little more like something that works for you.
Think about how many small actions go into a simple task like ordering food or responding to messages. You may be jumping between apps, copying information, and making decisions at every turn. Google’s new approach is designed to eliminate those intermediate steps.
With Android 17, this takes the form of what the company calls agent automation: you tell your phone what you want, and it figures out how to do it. Instead of opening three apps to plan dinner and a movie, you can simply ask for something fun nearby, and the system pulls together options, checks your schedule, and helps you make a decision.
The difference is not just speed but focus: you concentrate on the result while your phone handles the steps in between.

Google’s “Adaptive Everywhere” plan extends this beyond a single device, with AI agents that follow you digitally. You can start planning something on your phone, continue it on a laptop, and pick it up later in your car or on a larger screen at home. The AI keeps track of what you were doing, so you don’t have to start over.
Invisible applications
The changes Google has in mind won’t eliminate apps, but you may find them occupying less of your attention. Google is reversing the usual order of picking an app and then starting a task: instead, you’ll begin by asking the device to do something, and the AI will decide which apps to use, often without you ever seeing them.
In Chrome, for example, new AI features will help organize information and assist with tasks that span multiple sites. Gemini sits at the center, connecting everything and deciding how to complete what you ask of it.
Google clearly hopes this will simplify things for users, as each interaction will have the same basic form regardless of which apps the AI uses. But some may find it unsettling to give up control and let AI anticipate and complete actions on their behalf.
There are still limits to how far this can go. Systems that take on more responsibility must be accurate and reliable, especially when personal information is involved. There is also an adjustment in how people think about using technology: describing what you want is different from navigating step by step, and it takes time to trust the system to do the right thing.
The applications will still be there, doing what they have always done; you just may not notice them as much. And once that becomes the normal way of doing things, tapping through menus may start to feel like going back to dial-up internet.