AI apps on Google Play expose cloud secrets and user data, report finds

Experts have long warned against the blind use of AI in coding. One of those concerns is hardcoded secrets: sensitive data such as API keys, passwords, and cryptographic tokens embedded directly in source code or configuration files. You would expect AI-powered apps built on modern cloud stacks and marketed as cutting-edge to know better. New research suggests many still don’t.
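To make the anti-pattern concrete, here is a minimal sketch in Python (the key value and the MY_SERVICE_API_KEY variable name are hypothetical): the first assignment bakes a secret into the shipped code, while the second reads it from the environment at runtime so it never lands in the binary or the repository.

```python
import os

# Anti-pattern: a secret embedded directly in source code.
# Anyone who can read the code, or decompile the shipped app,
# gets the key along with it.
API_KEY = "AIzaSyD-EXAMPLE-NOT-A-REAL-KEY"  # hypothetical placeholder value

# Safer habit: load the secret from the environment (or a secret
# manager) at runtime, so it is never committed or shipped.
api_key = os.environ.get("MY_SERVICE_API_KEY")  # hypothetical variable name
if api_key is None:
    raise RuntimeError("MY_SERVICE_API_KEY is not set")
```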

A CyberNews investigation that analyzed AI apps on the Google Play Store found that “many companies leak hard-coded secrets and cloud endpoints, putting users at risk or even allowing attackers to empty their digital wallets.”

According to the report, “72% of apps analyzed contained at least one hard-coded secret. AI apps leaked 5.1 secrets on average, and 81.14% of discovered secrets were related to Google Cloud project identifiers, endpoints, and API keys.”

CyberNews found that hundreds of AI apps have already been compromised. Researchers identified 285 Firebase instances that had no authentication at all, meaning anyone could access them. These databases alone exposed 1.1 GB of user data.
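To see why “no authentication at all” matters, consider the Firebase Realtime Database REST interface: appending “.json” to a path returns that subtree, and if the security rules allow public reads, the whole database can be fetched with a single unauthenticated GET. A minimal sketch, with a hypothetical project name:

```python
import requests

# Firebase Realtime Database exposes a REST interface. If the
# database's security rules allow public reads, no credentials
# are needed to download everything under a path.
DB_URL = "https://example-ai-app-default-rtdb.firebaseio.com"  # hypothetical project

resp = requests.get(f"{DB_URL}/.json", timeout=10)
if resp.status_code == 200:
    print("Database is publicly readable:", resp.json())
elif resp.status_code == 401:
    print("Rules require authentication (the correct configuration).")
```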

In 42% of cases, researchers found database tables explicitly labeled “poc,” an abbreviation for proof of concept. Some databases contained administrator accounts with email addresses such as attacker@evil.com. “The indicators of compromise detected indicate that the problem of automated exploitation of misconfigured Firebase databases is widespread,” the CyberNews team said, adding that many of these systems appear to have “little monitoring.”

Cloud storage exposure was even larger. Misconfigured Google Cloud Storage buckets linked to AI apps exposed over 200 million files, totaling nearly 730 TB of data. On average, each exposed bucket contained 1.55 million files and 5.5 TB of data.
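A misconfigured bucket is similarly easy to enumerate: Google Cloud Storage’s JSON API lists a bucket’s objects without any token when the bucket grants read access to “allUsers.” A sketch with a hypothetical bucket name:

```python
import requests

# Google Cloud Storage JSON API: list the objects in a bucket.
# If the bucket grants "allUsers" read access, this request
# succeeds with no authentication at all.
BUCKET = "example-ai-app-uploads"  # hypothetical bucket name
url = f"https://storage.googleapis.com/storage/v1/b/{BUCKET}/o"

resp = requests.get(url, timeout=10)
if resp.status_code == 200:
    items = resp.json().get("items", [])
    print(f"Bucket is public; first page lists {len(items)} objects")
else:
    print("Listing denied:", resp.status_code)
```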

Not all leaked secrets pose the same level of risk, but some clearly cross the line. Researchers found credentials tied to messaging and engagement platforms such as Twitter, Intercom, and Braze, which could allow attackers to impersonate apps or interact directly with users. Analytics and monitoring APIs exposed internal logs and performance data.

The most severe cases involved financial infrastructure. Exposed keys linked to payment and reward systems can be misused to manipulate transactions and loyalty balances. The highest-risk find was a live Stripe secret key, which provides full control of the payments backend, including billing users and rerouting funds.
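The reason a leaked live secret key is so severe is that Stripe’s API authenticates with that key alone: whoever holds it can call any endpoint the account permits. A minimal sketch using the official stripe Python library (the key value is a hypothetical placeholder):

```python
import stripe

# A Stripe secret key is a bearer credential: possession equals
# full API access for that account. "sk_live_" keys touch real money.
stripe.api_key = "sk_live_EXAMPLE_PLACEHOLDER"  # hypothetical, not a real key

# A single read call proves the key works; from here, the same
# credential could create charges, refunds, or payouts.
balance = stripe.Balance.retrieve()
print(balance)
```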

One of the more surprising discoveries was what researchers did not see. Despite the apps’ focus on AI, LLM API keys were relatively rare. Even when they did appear, their impact was limited. As the researchers noted, a compromised LLM key typically allows an attacker to submit new requests, but does not give them access to past conversations or saved prompts.

The overall dataset also revealed a subtler problem: insufficient cleanup. Researchers discovered 26,424 hardcoded Google Cloud endpoints, but nearly two-thirds pointed to resources that no longer existed. Although deleted projects and abandoned buckets do not directly leak data, they indicate weak security hygiene and generate noise that attackers can exploit.
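Such stale references are straightforward to triage: a hardcoded endpoint that answers with “not found” points at a deleted project or bucket. A hedged sketch of what that check might look like (the endpoint list is hypothetical):

```python
import requests

# Hypothetical hardcoded endpoints recovered from app binaries.
endpoints = [
    "https://storage.googleapis.com/storage/v1/b/old-ml-models-bucket/o",
    "https://active-project-default-rtdb.firebaseio.com/.json",
]

for url in endpoints:
    try:
        code = requests.get(url, timeout=10).status_code
    except requests.RequestException:
        code = None
    # A 404 typically means the resource was deleted but the
    # reference was never cleaned out of the shipped code.
    status = "dangling" if code == 404 else f"alive ({code})"
    print(f"{url} -> {status}")
```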

Importantly, this is not an Android-only problem. When CyberNews previously scanned 156,000 iOS apps, it found a nearly identical pattern, with 70.9% containing hard-coded secrets and hundreds of terabytes of publicly exposed data.

Vibe coding is everywhere, but experts warn it leaves security holes in apps

The main security risk is not that the AI will write malicious code, but that the AI has limited memory and can forget important safety measures as the project grows.
