
Vibe Coding Security Risks: What the Data Says

53% of AI-generated code ships with vulnerabilities. 35 CVEs in one month. Here's what vibe coders need to know — and how to ship safely.

Atin Agarwal

The vibe coding explosion

You shipped your MVP in a weekend. Your AI coding tool wrote 95% of it. It works, it looks great, and your first users are signing up. But there’s a problem you probably haven’t thought about yet — and the data says it’s a big one.

Vibe coding — building software by describing what you want in plain English and letting AI write the code — went from a niche experiment to a $4.7 billion market in under two years. Collins Dictionary named it Word of the Year in 2025. By early 2026, 46% of all code on GitHub is AI-generated, and 25% of Y Combinator’s Winter 2025 cohort reported codebases that are 95% AI-written.

If you’re a founder shipping your MVP with Cursor, Replit, or Bolt, you’re not alone. An estimated 63% of vibe coders are non-developers — people with ideas and the ability to describe what they want, but no formal programming background. The barrier to building software has never been lower.

That’s the good news. Here’s the bad news.


The data nobody’s talking about

The speed of vibe coding is real. The security of vibe-coded applications is not keeping up.

53% of developers who shipped AI-generated code later discovered security vulnerabilities that passed their initial review. That’s not a hypothetical risk — it’s a self-reported finding from developers who thought their code was fine when they shipped it.

The hard numbers are worse.

The AI gets better at writing code that works. It does not get better at writing code that’s safe.

35 CVEs in a single month

Georgia Tech’s SSLab runs the “Vibe Security Radar,” a project tracking CVEs (publicly disclosed security flaws) directly introduced by AI coding tools. In March 2026 alone, they documented 35 new CVEs attributed to AI-generated code — up from 6 in January and 15 in February. The real number, they estimate, is 5–10x higher across the open-source ecosystem.

That’s not an academic concern. Those are real vulnerabilities in real software that real people are using.

The Moltbook incident

On January 28, 2026, a developer launched Moltbook — an “AI social network” — and publicly announced he “didn’t write a single line of code.” The app was entirely vibe-coded.

Within three days, security researchers found:

  • 1.5 million API authentication tokens exposed
  • 35,000 email addresses leaked
  • 4,060 private conversations accessible — some containing third-party OpenAI API keys
  • Approximately 4.75 million database records exposed to anyone who looked

The root cause? A single misconfigured database setting. The AI generated a Supabase integration without enabling Row Level Security — a one-checkbox configuration that any database tutorial covers. But the developer didn’t know to ask for it, and the AI didn’t think to add it.

The incident was widely reported — The Hill, SecurityBrief, and others covered it. Moltbook became a case study in what happens when vibe-coded applications ship without a security review.

5,600 vibe-coded apps scanned

A 2026 study by Escape.tech scanned approximately 5,600 applications built with AI coding tools and found:

  • 2,000+ security vulnerabilities
  • 400+ exposed secrets (API keys, credentials, tokens hardcoded into source)
  • 175 instances of personally identifiable information exposed through application endpoints

That’s roughly one vulnerability for every three apps scanned. And these were apps their developers considered ready to ship.


The 5 biggest security risks in vibe-coded apps

If you’re building with AI and you’re not a security expert, here are the five things most likely to go wrong — explained without the jargon.

1. Exposed secrets and API keys

AI coding tools routinely hardcode API keys, database passwords, and authentication tokens directly into your source code. When you push that code to GitHub (which most tutorials tell you to do), those secrets become publicly searchable. Attackers have automated bots that scan GitHub for exactly this.

What happens: Someone finds your OpenAI API key in your code, racks up thousands of dollars in charges on your account, and you don’t find out until the bill arrives.

2. Missing input validation

When a user types something into your app, that input needs to be checked and sanitized before your app does anything with it. AI-generated code frequently skips this step. This opens the door to cross-site scripting (XSS), where an attacker injects malicious code through your input fields, and SQL injection, where an attacker manipulates your database through form fields.

What happens: An attacker types a specially crafted string into your search bar, and suddenly they can read every user’s data in your database.

3. Broken authentication and session handling

AI tools often generate authentication code that looks functional but has subtle flaws — sessions that never expire, tokens stored insecurely, password reset flows that can be bypassed. The code works during testing because you’re testing the happy path.

What happens: A user logs in once, and their session token works forever — even after they change their password. An attacker who intercepts that token has permanent access.

4. Insecure dependencies

When AI generates code, it pulls in third-party packages (libraries other developers have published). It often picks outdated or vulnerable versions, and sometimes references packages that don’t exist — which attackers can then create with malicious code.

What happens: Your AI-generated code imports a package with a known vulnerability. Attackers already have exploit tools for it. Your app is compromised before you even know the dependency exists.

5. Missing error handling that leaks information

When something goes wrong in your app, the error messages your users see matter. AI-generated code often returns raw error messages that reveal your database structure, internal file paths, API endpoints, and sometimes credentials.

What happens: Your app crashes, and the error page shows a stack trace that tells an attacker exactly what database you’re using, what version it’s running, and where your data is stored.


The cost of fixing later vs. building securely

There’s a well-known rule in software engineering: the later you find a bug, the more expensive it is to fix.

For vibe coders shipping fast, the math is especially brutal:

| When you catch it | Relative cost | What it looks like |
| --- | --- | --- |
| While prompting the AI | 1x | Reprompt with security requirements. Free. |
| During development | 5–10x | Rewrite the component, retest adjacent features. |
| After launch (no breach) | 20–50x | Audit, patch, redeploy, notify affected users. |
| After a breach | 100–1,000x | Legal fees, compliance penalties, lost customers, reputation damage. |

The Moltbook founder spent approximately zero time on security during development. The cleanup — legal exposure, user notification, public reputation damage — will cost orders of magnitude more than a security review would have.

The cheapest security fix is the one you build into your AI prompts from the start. (And if you want a professional review, an AI agent security assessment catches what prompting alone misses.)


Security-aware prompting: 7 patterns that make your AI write safer code

This is the part no one else is covering. You can significantly improve the security of AI-generated code by changing how you prompt. These patterns work with Cursor, Replit, Bolt, Copilot, and any LLM-based coding tool.

Pattern 1: Demand input validation upfront

“Implement input validation for all user-facing inputs. Sanitize against XSS, validate types and lengths, and reject unexpected input before processing.”

Don’t assume the AI will add validation. It almost never does unless you explicitly ask.
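Here's what "validate and sanitize" looks like in practice. This is a minimal sketch in plain Node with no framework assumed; `validateComment` is a hypothetical helper that type-checks, length-checks, and HTML-escapes a user-supplied string before the app touches it.

```javascript
// Minimal input-validation sketch (plain Node, no framework assumed).
// validateComment is a hypothetical helper, not from any library.
function validateComment(input) {
  if (typeof input !== "string") {
    return { ok: false, error: "expected a string" };
  }
  const trimmed = input.trim();
  if (trimmed.length === 0 || trimmed.length > 500) {
    return { ok: false, error: "length must be 1-500 characters" };
  }
  // Escape characters with special meaning in HTML to blunt stored XSS.
  const escaped = trimmed
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
  return { ok: true, value: escaped };
}

console.log(validateComment("<script>alert(1)</script>")); // accepted, but escaped
console.log(validateComment(""));    // rejected: empty
console.log(validateComment(12345)); // rejected: not a string
```

The point isn't this exact helper; it's that every accept/reject decision happens before the value reaches your database or your HTML.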

Pattern 2: Ban hardcoded secrets

“Never hardcode API keys, database credentials, or tokens. Use environment variables for all secrets. Add a .env.example file with placeholder values and ensure .env is in .gitignore.”

This single prompt addition would have prevented the Moltbook breach.
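For reference, reading a secret from the environment instead of the source looks like this. A minimal sketch: `OPENAI_API_KEY` is an illustrative variable name, and `requireEnv` is a hypothetical helper, not a standard API.

```javascript
// Read secrets from environment variables instead of hardcoding them.
// requireEnv is a hypothetical helper; OPENAI_API_KEY is illustrative.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    // Fail fast at startup instead of crashing mid-request later.
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// At real app startup you would write:
// const apiKey = requireEnv("OPENAI_API_KEY");

process.env.DEMO_SECRET = "example-value"; // simulated here for the demo only
console.log(requireEnv("DEMO_SECRET"));    // "example-value"
```

Failing fast on a missing variable also catches the classic deploy mistake of forgetting to set secrets in your hosting platform's dashboard.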

Pattern 3: Require authentication on every endpoint

“Every API endpoint must check authentication before processing the request. Unauthenticated requests should return 401. Include role-based access control where different user types exist.”

AI-generated backends often create endpoints that work but don’t check who is calling them.
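The fix is a single gate that every handler passes through. A minimal sketch, framework-free: `verifyToken` is a stand-in for your real session or JWT verification, and the request/response shapes are simplified for illustration.

```javascript
// Authentication gate sketch. verifyToken is a stand-in for real
// session/JWT verification; request/response shapes are simplified.
const VALID_TOKENS = new Set(["demo-token"]); // stand-in for a token store

function verifyToken(token) {
  return VALID_TOKENS.has(token);
}

// Wraps a handler so unauthenticated requests get a 401 before any work runs.
function requireAuth(handler) {
  return (request) => {
    const auth = request.headers["authorization"] || "";
    const token = auth.replace(/^Bearer /, "");
    if (!verifyToken(token)) {
      return { status: 401, body: { error: "unauthorized" } };
    }
    return handler(request);
  };
}

const getProfile = requireAuth(() => ({ status: 200, body: { name: "Ada" } }));

console.log(getProfile({ headers: {} }).status); // 401
console.log(
  getProfile({ headers: { authorization: "Bearer demo-token" } }).status
); // 200
```

Wrapping handlers this way means a forgotten check fails closed: a new endpoint without the wrapper stands out in code review.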

Pattern 4: Ask for error handling that hides internals

“Implement error handling that returns user-friendly error messages. Never expose stack traces, database errors, file paths, or internal configuration in responses. Log detailed errors server-side only.”

This prevents your error messages from becoming a roadmap for attackers.
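In code, the split looks like this. A minimal sketch: `safeErrorResponse` is a hypothetical helper that logs full details server-side and hands the client only a generic message with an opaque reference id.

```javascript
// Error responses that hide internals: full details go to server-side
// logs only; the client sees a generic message and an opaque reference.
// safeErrorResponse is a hypothetical helper, not a library API.
function safeErrorResponse(err, logger = console) {
  // Stack trace and message stay server-side for debugging.
  logger.error("internal error:", err.stack || err.message);
  // Clients get a reference id they can quote to support, nothing more.
  const referenceId = Math.random().toString(36).slice(2, 10);
  return {
    status: 500,
    body: { error: "Something went wrong. Reference: " + referenceId },
  };
}

const resp = safeErrorResponse(new Error("ECONNREFUSED postgres://db:5432"));
console.log(resp.body.error.includes("postgres")); // false: nothing leaks
```

The reference id lets you correlate a user's complaint with the full server-side log entry without ever exposing the underlying error.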

Pattern 5: Specify secure session management

“Use secure, httpOnly, sameSite cookies for session tokens. Set session expiration to 24 hours. Invalidate sessions on password change and logout. Use a cryptographically secure random token generator.”

Session handling is where “it works in testing” diverges from “it’s secure in production.”

Pattern 6: Force dependency hygiene

“Only use well-maintained, widely-adopted packages with no known critical vulnerabilities. Pin exact versions in package.json/requirements.txt. Do not use deprecated packages.”

This reduces the risk of the AI pulling in vulnerable or nonexistent packages.

Pattern 7: Request a security self-review

“After generating the code, review it for OWASP Top 10 vulnerabilities. List any security concerns you identify and suggest fixes.”

LLMs are surprisingly good at spotting security issues when you explicitly ask them to look. The problem is that nobody asks.

Important caveat: Prompting alone is not enough. These patterns reduce risk significantly, but they don’t eliminate it. You still need to run automated security scans before shipping — which is exactly what tools like Vibe Code QA are built for.


The vibe coder’s security checklist

Ten things to check before you ship. No enterprise jargon. No security degree required.

  • 1. Secrets in environment variables, not in code. Search your codebase for any API key, password, or token that’s hardcoded. Move them all to .env and add .env to your .gitignore.

  • 2. Input validation on every form and API endpoint. If a user can type something in, your app should check it before using it. Ask your AI to add validation if it didn’t.

  • 3. Authentication on every backend route. Every API endpoint should verify who’s calling it. If you can access data by pasting a URL into your browser without logging in, that’s a problem.

  • 4. HTTPS everywhere. Your site should use HTTPS (not HTTP). Most hosting platforms do this automatically, but verify it. Mixed content (loading HTTP resources on an HTTPS page) breaks this protection.

  • 5. Database access controls enabled. If you’re using Supabase, enable Row Level Security. If you’re using Firebase, check your security rules. If you’re using a raw database, ensure it’s not publicly accessible.

  • 6. Dependencies are up to date. Run npm audit (Node.js), pip-audit (Python), or the equivalent for your stack. Fix critical and high vulnerabilities before shipping.

  • 7. Error messages don’t leak internals. Trigger some errors in your app and check what the user sees. If you see stack traces, database names, or file paths, fix your error handling.

  • 8. Rate limiting on login and API endpoints. Without rate limits, an attacker can try thousands of passwords per second or drain your API quota. Most frameworks have rate-limiting middleware — ask your AI to add it.

  • 9. No sensitive data in client-side code. View your page source in the browser. If you see API keys, admin URLs, or internal configuration, that data is visible to everyone.

  • 10. Run an automated security scan. Before you launch, run your code through an automated scanner. Vibe Code QA is built specifically for AI-generated code — it catches the patterns that AI tools consistently get wrong.
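For item 8, the core idea behind rate limiting fits in a few lines. A minimal fixed-window sketch, not production middleware: the window size, limit, and in-memory `Map` are illustrative, and in a real app you'd use your framework's rate-limiting middleware backed by a shared store.

```javascript
// Fixed-window rate limiter sketch (checklist item 8). Illustrative only:
// real apps should use framework middleware backed by a shared store.
const WINDOW_MS = 60 * 1000; // 1-minute window
const MAX_ATTEMPTS = 5;      // e.g. login attempts allowed per window
const hits = new Map();      // key (IP or user id) -> { count, windowStart }

function allowRequest(key, now = Date.now()) {
  const entry = hits.get(key);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(key, { count: 1, windowStart: now }); // new window
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_ATTEMPTS; // excess requests are refused
}

for (let i = 1; i <= 6; i++) {
  console.log(i, allowRequest("203.0.113.9"));
}
// Requests 1-5 pass; request 6 is rejected until the window resets.
```

Even this crude version turns "thousands of password guesses per second" into five per minute.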

Print this list. Tape it next to your monitor. Check every item before every deploy. It takes 30 minutes and could save you from becoming the next Moltbook.


FAQ

Is vibe coding safe?

Vibe coding itself isn’t inherently unsafe — but the code it produces often is. Studies show 40–62% of AI-generated code contains security vulnerabilities. The risk isn’t in using AI to write code; it’s in shipping that code without review. With security-aware prompting and a basic security checklist, you can significantly reduce the risk.

Can vibe-coded apps get hacked?

Yes, and they already are. The Moltbook incident in January 2026 exposed 1.5 million API tokens and 35,000 email addresses from an entirely AI-coded application. A 2026 scan of 5,600 vibe-coded apps found 2,000+ vulnerabilities and 400+ exposed secrets. Vibe-coded apps are targets because attackers know AI tends to produce the same patterns of insecure code.

How do I secure vibe-coded applications?

Start with three steps: (1) use security-aware prompts that explicitly request input validation, secure authentication, and environment variables for secrets; (2) follow a basic security checklist before every deploy — check for hardcoded secrets, verify authentication on all endpoints, and enable database access controls; (3) run an automated code quality check designed for AI-generated code before you ship.


Ship fast, stay secure

Vibe coding is here to stay. The $4.7 billion market is growing at 38% annually, and the tools are only getting more powerful. The founders who win won’t be the ones who avoid AI coding — they’ll be the ones who learn to use it without leaving the front door open.

The playbook is simple: prompt securely, check the checklist, scan before you ship.

Need an automated security check for your vibe-coded app? Try Vibe Code QA →

Want a comprehensive security assessment for your AI-powered product? Talk to AI Vyuh Security →

Building with AI agents? AI Vyuh is building the infrastructure layer for the AI agent economy — from security to code quality to cost optimization.


The vulnerabilities in vibe-coded apps are a subset of a broader pattern. Our analysis of the AI-generated code quality crisis covers the five failure patterns we see across production codebases — from architecture debt to dependency roulette.

Want to see what a systematic scan actually finds? We scanned our own codebase with our Code QA agents and discovered 406 findings in 35 seconds. And if your vibe-coded app includes AI agents, they need their own security assessment — traditional pentests miss 85% of the agent-specific attack surface.