Originally published on opena2a.org

Your AI Coding Tools Are Leaking Your API Keys

OpenA2A Team

#credentials #ai-coding-tools #security #secretless #api-keys

AI coding assistants have broad access to your development environment. They read .env files, scan terminal history, parse MCP server configurations, and process every file in your project. This means your API keys, database credentials, and cloud tokens are routinely loaded into AI context windows.

How Credentials Enter AI Context

Modern AI coding tools operate by reading your project files to understand context. This is what makes them useful -- but it also means they ingest credentials stored anywhere in your project tree. The most common exposure paths:

  • .env files -- The most direct path. AI tools read these to understand your project configuration.
  • MCP server configs -- Tools like Claude Desktop store server configurations with embedded tokens in JSON files.
  • Shell history -- Commands containing curl -H "Authorization: Bearer sk-..." persist in history files.
  • Hardcoded values -- API keys in source code, test fixtures, or configuration files.
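To make the MCP config path concrete, here is a hedged sketch of what a Claude Desktop server entry can look like -- the server name and token value are illustrative, but the overall shape (a JSON file with per-server `env` blocks) is where embedded tokens typically end up:

```json
{
  "mcpServers": {
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": {
        "GITHUB_PERSONAL_ACCESS_TOKEN": "ghp_example-token-here"
      }
    }
  }
}
```

Any tool that reads this file to discover available servers necessarily reads the token alongside it.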

Once a credential is in the AI context window, it can appear in generated code, be included in error reports, or be transmitted to the AI provider's API. Even if the provider discards it, the credential has left your local environment.

Protecting Credentials Without Breaking Workflow

The goal is to keep credentials out of AI context while preserving the development experience. Three approaches, from simplest to most comprehensive:

1. Environment variable references

Replace hardcoded values with environment variable references. AI tools see process.env.API_KEY instead of the actual key.

// Before: credential in source code
const client = new Anthropic({ apiKey: "sk-ant-..." });

// After: environment variable reference
const client = new Anthropic({ apiKey: process.env.ANTHROPIC_API_KEY });
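One practical wrinkle with environment variable references is that a missing variable fails silently (the key is just `undefined`) and surfaces later as a confusing auth error. A minimal sketch of a fail-fast helper -- `requireEnv` is a hypothetical name, not part of any SDK:

```javascript
// Sketch: read a key from the environment and fail fast at startup if
// it is missing, instead of passing `undefined` to the client and
// getting an opaque auth error later.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`${name} is not set -- check your environment`);
  }
  return value;
}

// Usage (mirrors the example above):
// const client = new Anthropic({ apiKey: requireEnv("ANTHROPIC_API_KEY") });
```

The credential never appears in source, and misconfiguration is caught at the one place the variable is read.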

2. File-level exclusion

Configure your AI tools to skip sensitive files entirely. Most tools respect .gitignore patterns, and some support additional exclusion lists.

# .gitignore -- most AI tools skip files matching these patterns
.env
.env.*
*.key
*.pem
.aws/credentials
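Some tools also support their own exclusion files on top of `.gitignore` -- for example, Cursor reads a `.cursorignore` file using the same pattern syntax. Check your tool's documentation for the exact mechanism; the entries below are illustrative:

```
# .cursorignore -- tool-specific exclusion, gitignore-style patterns
.env*
secrets/
*.pem
```

Layering a tool-specific ignore file covers files you want committed to git but still hidden from AI context.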

3. Automated credential scanning

Run credential detection as part of your development workflow to catch exposures before they reach AI context:

# Scan your project for exposed credentials
$ opena2a review

# Or use the secretless-ai package
$ npx secretless-ai scan
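For intuition about what such scanners do, here is a minimal pattern-matching sketch -- not the actual `opena2a` or `secretless-ai` implementation, and the regexes are illustrative rather than exhaustive:

```javascript
// Minimal credential scanner sketch: match each line of input against a
// few well-known credential shapes and report line numbers. Real
// scanners use far larger pattern sets plus entropy heuristics.
const CREDENTIAL_PATTERNS = [
  { name: "anthropic-key", regex: /sk-ant-[A-Za-z0-9_-]{10,}/ },
  { name: "aws-access-key", regex: /AKIA[0-9A-Z]{16}/ },
  { name: "bearer-token", regex: /Authorization:\s*Bearer\s+\S+/ },
];

function scanText(text) {
  const findings = [];
  text.split("\n").forEach((line, i) => {
    for (const { name, regex } of CREDENTIAL_PATTERNS) {
      if (regex.test(line)) {
        findings.push({ line: i + 1, pattern: name });
      }
    }
  });
  return findings;
}
```

Running this over a file with a hardcoded `sk-ant-...` key would flag the offending line before the file ever reaches an AI context window.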

The Broader Pattern

This is not a flaw in any specific AI tool -- it is a consequence of how AI coding assistants work. They need broad project access to be useful. The mitigation is to structure your project so that credentials are never stored in files the AI reads. Environment variables, secret managers, and file exclusion patterns all accomplish this without reducing the usefulness of the tools.

Read the full post with detailed mitigation steps on opena2a.org.

OpenA2A is building open security infrastructure for AI agents. Follow our progress at opena2a.org.