- Google Cloud Services Dominate Leaked Credentials Across Android Ecosystem
- Hundreds of Firebase Databases Show Clear Signs of Automated Compromise
- Exposed Storage Repositories Leaked Hundreds of Millions of Files
A major security investigation analyzed 1.8 million Android apps available on the Google Play Store, focusing on those that explicitly claim artificial intelligence features, and identified worrying security flaws that may be exposing secrets.
From this initial pool, Cybernews researchers identified 38,630 Android AI apps and examined their internal code for exposed credentials and cloud service references, finding data-handling flaws widespread enough to extend far beyond isolated developer errors.
Overall, the researchers found that nearly three-quarters (72%) of the Android AI apps analyzed contained at least one hardcoded secret embedded directly in the app code, and on average, each affected app leaked 5.1 secrets.
Hardcoded secrets are still common in Android AI apps
In total, the researchers identified 197,092 unique secrets across the entire data set, demonstrating that insecure coding practices remain widespread despite long-standing warnings.
More than 81% of all detected secrets were linked to Google Cloud infrastructure, including project IDs, API keys, Firebase databases, and storage buckets.
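Detection at this scale typically relies on pattern matching over decompiled app code, since many credential types have publicly documented formats. A minimal sketch of the idea, using illustrative regexes for two of the key types mentioned above (the study's actual ruleset is not known here):

```python
import re

# Illustrative patterns based on publicly documented key formats;
# a real scanner would cover many more credential types.
SECRET_PATTERNS = {
    "google_api_key": re.compile(r"AIza[0-9A-Za-z_\-]{35}"),
    "stripe_secret_key": re.compile(r"sk_live_[0-9A-Za-z]{24,}"),
}

def scan_for_secrets(text: str) -> list[tuple[str, str]]:
    """Return (kind, match) pairs for every credential-shaped string found."""
    hits = []
    for kind, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((kind, match.group()))
    return hits

# Example: a line from a decompiled resource file with a fabricated key.
decompiled = 'public static final String KEY = "AIza' + "A" * 35 + '";'
print(scan_for_secrets(decompiled))
```

Running such patterns over every string in an app's decompiled sources and resources is roughly how large-scale studies surface hardcoded credentials without any access to developer infrastructure.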
In total, 26,424 hardcoded Google Cloud endpoints were detected, although approximately two-thirds pointed to infrastructure that no longer existed.
Among the remaining endpoints, 8,545 Google Cloud storage buckets still existed and required authentication, while hundreds were misconfigured and left publicly accessible, possibly exposing more than 200 million files, totaling nearly 730TB of user data.
The study also identified 285 Firebase databases without any authentication checks, which together leaked at least 1.1 GB of user data.
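Misconfigurations like these can be verified with a single unauthenticated HTTP request, because a Firebase Realtime Database exposes a REST endpoint at a predictable URL. A minimal sketch of such a probe, assuming a hypothetical database name (the helpers `firebase_probe_url` and `looks_world_readable` are illustrative, not part of any published tooling):

```python
import urllib.error
import urllib.request

def probe(url: str, timeout: float = 5.0) -> tuple[int, bytes]:
    """Issue an unauthenticated GET and return (status, first KB of body)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status, resp.read(1024)
    except urllib.error.HTTPError as err:
        return err.code, err.read(1024)

def firebase_probe_url(db_name: str) -> str:
    # The Realtime Database REST API serves JSON at /.json;
    # shallow=true keeps the response small if the data is readable.
    return f"https://{db_name}.firebaseio.com/.json?shallow=true"

def looks_world_readable(status: int, body: bytes) -> bool:
    # A locked-down database answers 401/403 with a "Permission denied"
    # error payload; an open one returns 200 and its actual data.
    return status == 200 and b"error" not in body.lower()

# Usage (network call, not run here; "some-app" is a made-up name):
# status, body = probe(firebase_probe_url("some-app"))
# print(looks_world_readable(status, body))
```

The simplicity of this check is exactly why exposed databases are found, and abused, by automated scanners within days of going live.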
In 42% of these exposed databases, researchers found tables labeled "proof of concept", indicating prior compromise by attackers.
Other databases contained administrator accounts created with attacker-style email addresses, showing that exploitation was not theoretical but already underway.
Many of these databases remained unsecured even after clear signs of intrusion, suggesting poor monitoring rather than one-time errors.
Despite the concern about AI features, leaked large language model API keys were relatively rare: only a small number of keys associated with major vendors such as OpenAI, Google Gemini, and Anthropic's Claude were detected in the entire data set.
In typical configurations, these leaked keys would allow attackers to send new requests, but would not provide access to stored conversations, historical messages, or previous responses.
Some of the most serious exposures involved live payments infrastructure, including leaked Stripe secret keys capable of granting full control over payment systems.
Other leaked credentials allowed access to communications platforms, analytics, and customer data, allowing for application spoofing or unauthorized data extraction.
These flaws cannot be mitigated with basic defenses such as firewalls or malware removal tools once exposure has occurred.
The scale of data exposed and the number of apps already compromised suggest that analysis of app stores alone has not reduced systemic risk.