- Antigravity IDE allows agents to automatically execute commands under default settings
- Fast injection attacks can trigger unwanted code execution within the IDE
- Data exfiltration occurs through Markdown, tool invocations, or hidden instructions.
Google’s new Antigravity IDE launched with an AI-first design, but it’s already showing issues that raise concerns about basic security expectations, experts warned.
PromptArmor researchers discovered that the system allows its encryption agent to automatically execute commands when certain default settings are enabled, and this creates opportunities for unwanted behavior.
When untrusted input appears within source files or other processed content, the agent can be manipulated to execute commands that the user never intended.
Risks linked to data access and exfiltration
The product allows the agent to execute tasks through the terminal, and while safeguards are in place, some gaps remain in how those checks work.
These gaps create space for fast injection attacks that can lead to unwanted code execution when the agent follows hidden or hostile inputs.
The same weakness applies to the way Antigravity handles file access.
The agent has the ability to read and generate content, and this includes files that may contain credentials or sensitive project material.
Data exfiltration is possible when malicious instructions are hidden within Markdown, tool invocations, or other text formats.
Attackers can leverage these channels to direct the agent to leak internal files to locations controlled by the attacker.
The reports reference logs containing cloud credentials and private code that are already being collected in successful demos, showing the severity of these gaps.
Google has acknowledged these issues and warns users during onboarding, but such warnings do not outweigh the possibility of agents operating unattended.
Antigravity encourages users to accept recommended configurations that allow the agent to operate with minimal supervision.
The setup puts decisions about human review in the hands of the system, even when terminal commands require approval.
Users working with multiple agents through the Agent Manager interface may not detect malicious behavior before the actions are completed.
This design assumes continuous user attention even though the interface explicitly promotes background operation.
As a result, sensitive tasks can run unchecked and simple visual warnings do little to change the underlying exposure.
These options undermine the expectations generally associated with a modern firewall or similar protection.
Despite the restrictions, credential leaks can occur. The IDE is designed to prevent direct access to files listed in .gitignore, including .env files that store sensitive variables.
However, the agent can bypass this layer by using terminal commands to print the contents of the file, effectively bypassing the policy.
After collecting the data, the agent encrypts the credentials, adds them to a monitored domain, and activates a browser subagent to complete the exfiltration.
The process occurs quickly and is rarely visible unless the user is actively observing the agent’s actions, which is unlikely when multiple tasks are running in parallel.
These issues illustrate the risks created when AI tools are granted broad autonomy without corresponding structural safeguards.
The design is aimed at convenience, but the current configuration gives attackers a substantial advantage long before more robust defenses are implemented.
Follow TechRadar on Google News and add us as a preferred source to receive news, reviews and opinions from our experts in your feeds. Be sure to click the Follow button!
And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form and receive regular updates from us on WhatsApp also.




