AI coding lowers the barrier to building software, but it also hands beginners and non-engineering users a set of security problems that used to belong to professional engineers.
One of the most common incidents is pushing API keys, secrets, tokens, database connection strings, or `.env` files to a public repository. Locally, these files look like ordinary configuration that keeps the app running. Once they enter a public GitHub repository, they become credentials that can be scanned, called, and abused automatically.
Secret leaks are not rare. GitGuardian’s 2026 report says public GitHub commits in 2025 contained about 28.65 million new hardcoded credentials, and AI-service credential leaks grew 81% year over year. The issue is no longer just carelessness. AI coding, rapid prototyping, and public hosting are amplifying the scale.
Why Beginners Leak Keys More Easily
Many AI agents and small tools have two “repositories”: one on the local disk, and one visible to the world on GitHub. The problem is that beginners often do not understand the boundary between the two.
During local development, `config.json`, `.env`, and `settings.yaml` may contain API keys. After `git add .`, `git commit`, and `git push`, those files may be uploaded in full. Once a repository is public, scanning bots do not need to understand your business logic. They only need to match a secret pattern.
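To make "match a secret pattern" concrete: the core of a scanner bot is little more than a recursive grep over key-shaped strings. The patterns and the file below are illustrative placeholders, not a complete rule set:

```shell
# Scratch directory standing in for a freshly pushed repository.
cd "$(mktemp -d)"
echo 'OPENAI_API_KEY=sk-aaaaaaaaaaaaaaaaaaaaaaaa' > config.json  # placeholder, not a real key

# No knowledge of the project required -- just pattern matching:
grep -rnE "sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16}" .
# -> ./config.json:1:OPENAI_API_KEY=sk-aaaaaaaaaaaaaaaaaaaaaaaa
```

Real scanners use hundreds of such patterns and run them against every new public commit within minutes.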
AI coding makes this worse:
- AI-generated examples may place `OPENAI_API_KEY = "sk-..."` directly in source code.
- Beginners often hardcode secrets in frontend code, scripts, or config files just to get the project running.
- Many vibe coding platforms can deploy apps directly without going through GitHub push protection.
- Users may not know which files, APIs, or default permissions exist inside an AI-generated project.
In short, AI can help you build something that runs faster. It does not automatically take over the security responsibility.
.gitignore Is Not Decoration
Git manages version history, GitHub hosts code, and `.gitignore` tells Git which files should not enter that history.
A basic AI project's `.gitignore` should at least cover entries like these (adjust for your stack):

```
# secrets and local configuration
.env
.env.*
!.env.example
*.pem

# dependencies and build output
node_modules/
dist/
__pycache__/
```
But .gitignore alone is not enough. It only prevents untracked files from being added later. If a secret file has already been committed, adding it to .gitignore will not remove it from history.
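When a secret file is already tracked, the fix is to remove it from the index, ignore it, and commit. Sketched below on a scratch repository, assuming the file is `.env`; note that old commits still hold the contents, so the key must be rotated regardless:

```shell
# Scratch repository standing in for your project.
cd "$(mktemp -d)" && git init -q
git config user.email "demo@example.com" && git config user.name "demo"
echo 'API_KEY=sk-demo-not-a-real-key' > .env
git add .env && git commit -qm "oops: .env is now tracked"

# Adding .env to .gitignore alone changes nothing for tracked files.
# Remove it from the index (the copy on disk stays), then ignore it:
git rm --cached -q .env
echo ".env" >> .gitignore
git add .gitignore && git commit -qm "stop tracking .env"

git ls-files   # lists .gitignore only -- .env is no longer tracked
```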
A safer habit is:
- Create `.gitignore` at the beginning of a project.
- Store API keys only in environment variables or local config.
- Provide `.env.example` with placeholders, not real secrets.
- Run a secret scanner before committing, such as `gitleaks`, `trufflehog`, or GitHub Secret Scanning.
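Even before installing `gitleaks` or `trufflehog`, a ten-line pre-commit hook catches the most common key shapes. This is a simplified stand-in for a real scanner, shown on a scratch repository with illustrative patterns:

```shell
cd "$(mktemp -d)" && git init -q
git config user.email "demo@example.com" && git config user.name "demo"

# Refuse any commit whose staged diff contains a key-shaped string.
cat > .git/hooks/pre-commit <<'EOF'
#!/bin/sh
if git diff --cached -U0 | grep -qE "sk-[A-Za-z0-9]{20,}|AKIA[0-9A-Z]{16}"; then
  echo "Possible secret in staged changes; commit blocked." >&2
  exit 1
fi
EOF
chmod +x .git/hooks/pre-commit

echo 'const KEY = "sk-aaaaaaaaaaaaaaaaaaaaaaaa";' > app.js
git add app.js
git commit -m "add app" || echo "blocked as expected"
```

A dedicated scanner knows far more patterns and checks entropy, so treat this as a safety net, not a replacement.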
Deleting the File Is Not Enough
If a key has already been pushed to a public repository, the first reaction should not be “delete the file and commit again.” Revoke or rotate the key first.
Git records history. Even if the latest commit removes the file, old commits, forks, clones, caches, and scanners may still contain it. GitHub’s documentation also recommends revoking or rotating passwords, tokens, and credentials as the first step.
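This is easy to demonstrate on a scratch repository (the key below is a placeholder): deleting the file in a new commit leaves the previous commit fully readable.

```shell
cd "$(mktemp -d)" && git init -q
git config user.email "demo@example.com" && git config user.name "demo"

echo 'API_KEY=sk-demo-not-a-real-key' > .env
git add .env && git commit -qm "oops: commit the key"

git rm -q .env && git commit -qm "remove the key"

# The working tree is clean, but the parent commit still serves the
# secret to anyone who cloned, forked, or scanned the repo in time:
git show HEAD~1:.env
```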
Recommended order:
- Revoke the old key in the provider console and create a new one.
- Check billing, usage logs, suspicious IPs, and unusual traffic.
- Remove hardcoded secrets and switch to environment variables or a secret manager.
- Clean sensitive files from repository history with `git filter-repo` or BFG.
- Enable GitHub Secret Scanning and Push Protection.
- Check CI/CD, deployment platforms, cloud functions, and frontend build artifacts for the old key.
For OpenAI, Anthropic, DeepSeek, cloud providers, payment services, email services, and databases, a leaked key can lead to more than unexpected bills. It may expose data, enable abuse, affect the supply chain, or get business accounts banned.
Real Secrets Do Not Belong in Frontend Code
Many beginners put API keys into frontend JavaScript because the page works:

```js
// Looks fine locally -- but ships the key to every visitor.
const OPENAI_API_KEY = "sk-..."; // hardcoded secret

fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Authorization": `Bearer ${OPENAI_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hi" }],
  }),
});
```
This is effectively public. Browser code, network requests, source maps, and build artifacts can all be inspected. Any key that must remain secret should not appear on the client side.
The correct approach is to let the frontend call your own backend, and let the backend read environment variables and call the third-party API (the `/api/chat` route here is illustrative):

```js
// Frontend: no key anywhere -- it only talks to your own server.
const res = await fetch("/api/chat", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ message: "Hi" }),
});
const data = await res.json();
```
Then the server uses the environment variable (a minimal Express sketch):

```js
// Backend: the key lives only in the server environment.
import express from "express";

const app = express();
app.use(express.json());

app.post("/api/chat", async (req, res) => {
  const r = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-mini",
      messages: [{ role: "user", content: req.body.message }],
    }),
  });
  res.json(await r.json());
});

app.listen(3000);
```
This keeps the secret in the server environment instead of exposing it to every visitor.
Vibe Coding Does Not Remove Security Responsibility
The risk is not limited to GitHub leaks. Many apps are published directly from AI coding platforms to the public internet, bypassing traditional code review, repository scanning, and security testing.
Recent RedAccess research found a large number of publicly accessible assets generated or hosted by AI coding tools, some exposing corporate data, personal information, or internal files. The lesson is simple: when “can deploy” becomes too easy, people often forget to ask “should this be public?”, “should this only be internal?”, and “does it have access control?”
Before publishing an AI-generated app, ask:
- Does this app really need public access?
- Does it have login, authentication, and permission isolation?
- Are database URLs, API keys, tokens, or webhook URLs exposed in frontend code?
- Are third-party API quota, domain, permission, and expiry limits configured?
- Can keys be disabled and deployments rolled back quickly after an incident?
AI-generated code still needs security review. The less code you personally wrote, the less you should assume it is safe.
Checks to Run Now
Start with your own GitHub account. Search your username together with terms like:

```
sk-
api_key
apikey
secret
password
token
.env
```
If you find a real key, rotate first and clean up later. If it ever entered a public repository, treat it as leaked.
For future AI projects, use a fixed process:
- Write `.gitignore` before writing business code.
- Use `.env.example` to document required variables.
- Put all secrets in environment variables, not source code.
- Give API keys minimal permissions, quotas, and expiry dates.
- Enable GitHub Secret Scanning and Push Protection.
- Let AI help with a security review before publishing, but do not trust AI alone.
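The `.env.example` convention is worth spelling out: the file lists every variable the app needs, with placeholders instead of values, so it is safe to commit. Variable names below are invented for illustration:

```shell
# .env.example -- safe to commit; copy to .env locally and fill in real values
OPENAI_API_KEY=your-openai-key-here
DATABASE_URL=postgres://user:password@localhost:5432/appdb
SESSION_SECRET=change-me
```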
The danger of AI coding is not simply that it may write bad code. It gives many people the ability to publish unsafe apps to the public internet for the first time. Writing fast is not the problem. Handing out secrets, data, and permissions is.