- Gemini can now chain actions to complete complex tasks
- Gemini Live is gaining multimodal capabilities on newer phones
- Gemini will evolve into a full-fledged AI assistant with Project Astra
At today’s Galaxy Unpacked event, coinciding with the launch of the Samsung Galaxy S25 range, Google announced some impressive updates to its Gemini AI platform. Many of the improvements are specific to new devices like the Galaxy S25, but some also work on older Samsung Galaxy S24 and Pixel 9 phones.
The standout feature is Gemini’s new ability to chain actions. This means you can now do things like connect to Google Maps to find nearby restaurants and then compose a text in Google Messages to send to people you’d like to invite to lunch, all through Gemini commands.
The chaining capability is coming to all devices running Gemini, “depending on extensions,” meaning a developer has to write an extension linking a particular app to Gemini before that app can take part. Naturally, all of Google’s main apps already have Gemini extensions, and extensions are also available for the Samsung Reminder, Samsung Calendar, Samsung Notes, and Samsung Clock apps.
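For developers curious what action chaining looks like in practice, here is a rough sketch using function calling in the developer-facing Gemini API (the google-generativeai Python SDK). It is purely illustrative: the find_restaurants and send_message functions are hypothetical stand-ins, and the on-device Samsung and Google app extensions are not built through this public API, but the core idea of the model deciding to call one tool and then another in sequence is the same.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Hypothetical stand-ins for app "extensions"; the real on-device extensions
# are not exposed as a public API.
def find_restaurants(cuisine: str, area: str) -> str:
    """Return a comma-separated list of nearby restaurants matching a cuisine."""
    return "Example Bistro, Sample Sushi"  # placeholder data

def send_message(recipients: str, text: str) -> str:
    """Send a text message to a comma-separated list of contacts."""
    return "sent"  # placeholder result

# Register both functions as tools so the model can chain them in one request.
model = genai.GenerativeModel(
    "gemini-1.5-flash",
    tools=[find_restaurants, send_message],
)

# Automatic function calling lets the SDK execute each tool call the model makes.
chat = model.start_chat(enable_automatic_function_calling=True)
reply = chat.send_message(
    "Find a sushi place near downtown, then text Alice and Bob an invite for lunch."
)
print(reply.text)
```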
Gemini Live goes multimodal
Google’s Gemini Live, the part of Gemini that lets you have a natural, human-like conversation with the AI, is also getting major multimodal updates. You’ll now be able to add images, files, and YouTube videos to the conversation, so you could, for example, ask Gemini Live, “Hey, look at this photo from my school project and tell me how I could improve it,” then upload the image and receive a response.
However, Gemini’s multimodal improvements are not available across the board: they require a Galaxy S24, Galaxy S25, or Pixel 9 to work.
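To make the upload-and-ask pattern concrete, here is a minimal sketch using the developer-facing Gemini API (google-generativeai Python SDK), which already accepts images alongside text. This is an illustration of the same idea, not the Gemini Live app itself, and the file name is a placeholder.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

# Upload the photo (placeholder file name), then ask about it in the same request.
photo = genai.upload_file("school_project.jpg")
response = model.generate_content(
    [photo, "Look at this photo from my school project and tell me how I could improve it."]
)
print(response.text)
```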
Project Astra
Finally, Google has announced that Project Astra capabilities will arrive in the coming months, landing first on the Galaxy S25 and Pixel phones. Project Astra is Google’s prototype AI assistant that lets you interact with the world around you: using your phone’s camera, you can ask questions about what you’re looking at and where you are. So you can simply point your phone at something and ask Gemini to tell you about it, or ask it when the next stop on your bus route will be.
Project Astra works on mobile phones, but the experience goes to the next level when combined with Google’s hands-free AI glasses prototype: you can simply start asking Gemini questions about what you’re looking at, without having to interact with a screen at all.
While there’s no word yet on a release date for this next generation of Google glasses, they will join Meta Ray-Ban glasses in the emerging AI wearable market when they finally become available.