The rapid democratization of software development through AI tools has created an unexpected cybersecurity crisis. While these platforms promise to let anyone build applications with ease, a new investigation reveals that thousands of these “vibe-coded” apps are hosting sensitive corporate and personal data on the open web—often with zero security protections.

Security researchers have found that the ease of creating web applications via AI has outpaced the security awareness of their creators, leading to widespread exposure of private information, from medical records to corporate strategy documents.

A Massive Scale of Exposure

Dor Zvi and his team at cybersecurity firm RedAccess conducted a sweeping analysis of web applications built using popular AI coding platforms, including Lovable, Replit, Base44, and Netlify. Their findings were stark:

  • 5,000+ Vulnerable Apps: The researchers identified over 5,000 web applications that lacked any form of authentication or security measures.
  • 40% Exposed Sensitive Data: Approximately 40% of these apps contained sensitive information accessible to anyone who knew the URL.
  • Trivial Barriers: In many cases, the only “security” was a request for an email address, which could be bypassed with a fake entry.

The data exposed was not merely placeholder text. Screenshots verified by WIRED showed:
  • Hospital work assignments containing doctors’ personally identifiable information (PII).
  • Detailed corporate ad purchasing logs and go-to-market strategies.
  • Full transcripts of customer chatbot interactions, including names and contact details.
  • Shipping cargo records and financial sales data.

In some severe instances, the exposed applications allowed attackers to gain administrative privileges, effectively handing control of the system to strangers.

Why This Happens: The “Vibe Coding” Trap

The term “vibe coding” refers to the process of building software using natural language prompts with AI, rather than traditional programming. While this lowers the barrier to entry, it introduces a critical flaw: users often lack the security expertise to configure their apps safely.

Joel Margolis, a security researcher who previously uncovered a similar breach involving an AI chat toy, explains the core issue:

“Somebody from a marketing team wants to create a website. They’re not an engineer and they probably have little to no security background or knowledge. AI coding tools do what you ask them to do. And unless you ask them to do it securely, they’re not going to go out of their way to do that.”

The platforms themselves often host these apps on their own domains (e.g., app-name.lovable.app), making them easily discoverable via search engines. Zvi’s team used simple Google and Bing searches targeting these domains to find the vulnerabilities, highlighting how exposed these applications are to automated scanning by malicious actors.
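The discovery method described above amounts to combining a search engine’s site-scoping operator with keywords likely to appear on sensitive pages. A minimal sketch of how such query lists might be assembled — note that only the lovable.app domain is confirmed by the reporting; the other hosting suffixes and all keywords here are illustrative assumptions, not the researchers’ actual terms:

```python
# Sketch of site-scoped search queries ("dorks") of the kind the
# researchers describe. Only lovable.app is confirmed in the article;
# the other suffixes and the keyword list are illustrative assumptions.

PLATFORM_DOMAINS = [
    "lovable.app",   # confirmed hosting domain (app-name.lovable.app)
    "netlify.app",   # assumed hosting suffixes for the other platforms
    "replit.app",
    "base44.app",
]

# Hypothetical keywords that might surface exposed dashboards or records.
KEYWORDS = ["admin", "dashboard", "invoice", "patient"]


def build_queries(domains, keywords):
    """Return one site-scoped query string per (domain, keyword) pair."""
    return [f'site:{domain} "{kw}"' for domain in domains for kw in keywords]


if __name__ == "__main__":
    for query in build_queries(PLATFORM_DOMAINS, KEYWORDS):
        print(query)
```

Queries like these can be fed to any search engine by hand, which is part of the point: no scanner or exploit is required, only ordinary search syntax.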

Industry Response: User Error or Platform Failure?

When confronted with these findings, the AI coding companies pushed back, arguing that the breaches were due to user configuration choices rather than platform vulnerabilities.

  • Replit: CEO Amjad Masad stated that public accessibility is “expected behavior” for apps set to public, noting that privacy settings can be changed with a single click.
  • Lovable: A spokesperson emphasized that while they provide secure building tools, the final configuration is the creator’s responsibility.
  • Base44 (Wix): Blake Brodie, head of PR for Base44’s parent company, argued that disabling security controls is a “deliberate, straightforward action” by the user. She also questioned the validity of the data, suggesting some examples might be test sites with AI-generated dummy data.

However, RedAccess disputed this narrative. They provided anonymized communications showing users thanking the researchers for alerts that led them to secure or take down their exposed apps. Furthermore, Zvi noted that they found numerous phishing sites impersonating major brands like Bank of America, Costco, and McDonald’s hosted on Lovable’s domain, suggesting a broader ecosystem of abuse.

The Broader Implications

This incident mirrors the Amazon S3 storage bucket breaches of previous years, where misconfigured cloud storage led to massive data leaks for companies like Verizon and WWE. The cybersecurity industry is now divided on responsibility:
1. User Responsibility: Platforms argue they provide the tools, and users must use them correctly.
2. Platform Responsibility: Critics argue that default settings should prioritize security, especially when the user base includes non-technical individuals who may not understand the risks.

Zvi warns that the 5,000 apps found on platform domains are just the tip of the iceberg. Thousands more likely exist on custom domains, hidden from public search but equally vulnerable. As AI coding becomes mainstream, the gap between ease of creation and security implementation threatens to become one of the most significant data privacy challenges of the decade.


Conclusion: The rise of AI-assisted development has democratized coding but has inadvertently created a wild west of insecure applications. Until security becomes a default, non-negotiable feature of these platforms, organizations and individuals risk exposing their most sensitive data to the open web with every click.