Secrets Management 101: Stop Storing Credentials in .env Files
Learn why .env files are a security risk - especially with AI coding agents - and how to implement proper secrets management with tools like Vault, AWS Secrets Manager, and Doppler.
That .env file in your project directory? It's a security incident waiting to happen. Every startup begins the same way: API keys in environment files, database passwords copy-pasted between teammates, and a collective hope that nobody accidentally commits secrets to GitHub.
This approach works until it doesn't. And when it fails, the consequences range from embarrassing (a leaked Stripe test key) to catastrophic (production database credentials on a public repository).
In this guide, you'll learn why .env files create security risks, how to evaluate secrets management tools, and practical implementation patterns for your tech stack. We'll also cover what auditors look for when evaluating secrets management for SOC 2 compliance.
Why .env Files Are a Security Risk
The .env file pattern became popular because it's simple: define key-value pairs, load them into your environment, and reference them in code. Libraries like dotenv for Node.js and python-dotenv made this trivially easy.
But simplicity comes with trade-offs.
Accidental Commits to Version Control
According to GitGuardian's 2024 State of Secrets Sprawl Report, nearly 12.8 million new secrets were detected in public GitHub commits in 2023 alone. Despite .gitignore files, secrets still leak through:
- Developers forgetting to add `.env` to `.gitignore` in new projects
- IDE auto-complete suggesting `.env` files for commits
- `.env.example` files containing real values instead of placeholders
- Backup files like `.env.backup` or `.env.local` not being ignored
Once a secret is committed, it exists in git history forever unless you rewrite history - a disruptive operation that many teams avoid.
Prevention tip: Use pre-commit hooks with secrets scanning tools like detect-secrets, gitleaks, or trufflehog to catch secrets before they're committed.
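To make the idea concrete, here is a minimal sketch of a Python pre-commit hook that scans staged files for a few obvious credential patterns. The regexes and file handling are illustrative assumptions only; in practice a dedicated scanner such as gitleaks, detect-secrets, or trufflehog covers far more formats and should do the real work.

```python
#!/usr/bin/env python3
"""Minimal pre-commit hook: block commits containing obvious secrets.

Illustrative sketch only - dedicated scanners (gitleaks, detect-secrets,
trufflehog) detect far more credential formats.
"""
import re
import subprocess
import sys

# A couple of example patterns: AWS access key IDs and generic KEY=value secrets.
PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"(?i)(api[_-]?key|secret|password)\s*=\s*['\"][^'\"]{8,}"),
]


def staged_files():
    result = subprocess.run(
        ["git", "diff", "--cached", "--name-only", "--diff-filter=ACM"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in result.stdout.splitlines() if line]


def main():
    findings = []
    for path in staged_files():
        try:
            with open(path, "r", errors="ignore") as handle:
                content = handle.read()
        except OSError:
            continue  # deleted or unreadable file; skip it
        for pattern in PATTERNS:
            if pattern.search(content):
                findings.append(f"{path}: matches {pattern.pattern}")
    if findings:
        print("Possible secrets detected, aborting commit:")
        print("\n".join(findings))
        return 1
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Saved as `.git/hooks/pre-commit` and made executable (or wired into a pre-commit framework), a hook like this runs before every commit and refuses any commit that matches.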
No Access Control
A .env file is readable by anyone who can access the file system. There's no distinction between a junior developer who needs one API key and a senior engineer who needs database credentials. Everyone sees everything.
This violates the principle of least privilege - a fundamental security concept and a requirement for compliance frameworks like SOC 2. Secrets managers like Vault and AWS Secrets Manager let you define granular policies, such as allowing a service to read only the specific secrets it needs rather than all secrets in an account.
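As a concrete illustration with AWS Secrets Manager, you can attach a resource policy that lets one IAM role read one secret and nothing else. This is a minimal sketch; the secret name and role ARN below are hypothetical placeholders.

```python
import json

import boto3

# Hypothetical identifiers - substitute your own secret name and role ARN.
SECRET_ID = "prod/billing/stripe-api-key"
SERVICE_ROLE_ARN = "arn:aws:iam::123456789012:role/billing-service"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": SERVICE_ROLE_ARN},
            "Action": "secretsmanager:GetSecretValue",
            # A resource policy is attached to a single secret, so "*" here
            # means "this secret", not "all secrets in the account".
            "Resource": "*",
        }
    ],
}

client = boto3.client("secretsmanager", region_name="us-east-1")
client.put_resource_policy(SecretId=SECRET_ID, ResourcePolicy=json.dumps(policy))
```

Vault expresses the same idea with path-based policies attached to tokens or roles.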
No Audit Trail
When something goes wrong, you need to answer: Who accessed this secret? When? From where? With .env files, you can't. There's no logging, no access history, and no way to detect if credentials were exfiltrated.
Rotation Complexity
Rotating a secret stored in .env files means:
- Generating a new credential
- Updating the file on every server, container, and developer machine
- Coordinating deployment timing to avoid downtime
- Hoping nobody missed the memo
This friction means secrets don't get rotated as often as they should - or at all.
The AI Coding Agent Risk (2025+)
Here's a risk that didn't exist two years ago: AI coding assistants now read your entire codebase - including your .env files.
Tools like GitHub Copilot, Cursor, Claude Code, and other AI-powered development assistants need context to be useful. That context often includes configuration files, environment variables, and yes, your secrets. This creates two distinct security concerns.
Double exposure to third parties. When an AI assistant reads your .env file, those credentials are transmitted to the AI provider's servers (OpenAI, Anthropic, Google, etc.) as part of the context window. Even if these providers have strong security practices and data handling policies, you've now expanded your trust boundary. Your HubSpot API key isn't just on your laptop and your servers - it's been processed by a third-party AI system.
Autonomous credential usage. Modern AI coding agents don't just read code - they execute it. An agent helping you debug an API integration might:
- Read your `.env` file to understand the configuration
- Make actual API calls using your production credentials
- Interact with databases, payment systems, or external services
- All without explicit approval for each action
Consider this scenario: you ask an AI agent to "fix the HubSpot sync issue." The agent reads your .env, sees HUBSPOT_ACCESS_TOKEN, and starts making API calls to diagnose the problem. It might query contacts, update records, or trigger workflows - using your production credentials, potentially without you realizing each action it takes.
Mitigation. This is where proper secrets management becomes essential beyond traditional concerns:
- Secrets in a vault (not `.env`) are never exposed to AI context windows
- Runtime secret injection means agents can't read credentials from files
- API keys with narrow scopes limit blast radius if an agent does access them
- Audit logs reveal if AI-driven processes accessed secrets unexpectedly
Some teams are adopting "AI-safe" development practices: separate credential stores that are explicitly excluded from AI context, read-only API keys for development, and sandbox environments where AI agents can operate without production access.
The bottom line: .env files were already a liability. AI coding assistants have made them an active attack surface in your daily workflow.
Secrets Management Tools Comparison
Modern secrets management tools solve these problems by centralizing credential storage with proper access controls, audit logging, and rotation capabilities.
Here's how the major options compare:
HashiCorp Vault
Best for: Organizations with complex requirements, multi-cloud environments, or regulatory needs demanding granular control.
Vault is the most powerful and flexible option. It supports dynamic secrets (credentials generated on-demand with automatic expiration), multiple authentication methods, and fine-grained access policies. The trade-off is complexity - Vault requires dedicated infrastructure and operational expertise.
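To give a feel for dynamic secrets, here is a rough sketch using the hvac Python client. It assumes a database secrets engine is already mounted at the default path with a role named `readonly`; both of those are assumptions for this example.

```python
import os

import hvac

# Assumes VAULT_ADDR and VAULT_TOKEN are set in the environment.
client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])

# Ask Vault to mint short-lived database credentials for the "readonly" role.
lease = client.secrets.database.generate_credentials(name="readonly")

username = lease["data"]["username"]
password = lease["data"]["password"]
print(f"Credentials valid for {lease['lease_duration']} seconds")

# When the lease expires, Vault revokes the credentials automatically -
# there is nothing long-lived to rotate or leak.
```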
Pricing: Vault Community Edition is source-available under the Business Source License (BSL). Enterprise pricing varies by deployment.
AWS Secrets Manager
Best for: Teams already running on AWS who want native integration.
AWS Secrets Manager integrates tightly with other AWS services. IAM policies control access, CloudTrail provides audit logging, and built-in rotation works with RDS, Redshift, and DocumentDB. If you're AWS-native, this is often the path of least resistance.
Pricing: $0.40 per secret per month, plus $0.05 per 10,000 API calls.
1Password (Secrets Automation)
Best for: Small teams already using 1Password who want a simple path to better secrets management.
1Password's Secrets Automation extends their password manager to infrastructure. It's less powerful than Vault but dramatically simpler to set up. The CLI and SDKs integrate with CI/CD pipelines and application code.
Pricing: Business tier starts at $7.99/user/month; Secrets Automation has its own billing model. Check 1Password's pricing page for current rates.
Doppler
Best for: Startups wanting a developer-friendly solution with minimal setup.
Doppler focuses on developer experience. The CLI syncs secrets across environments, the dashboard shows secret usage across projects, and integrations cover most deployment platforms. It's designed for teams who want proper secrets management without becoming security experts.
Pricing: Free tier available; paid tiers vary. Check Doppler's pricing page for current rates.
Quick Comparison
| Feature | Vault | AWS Secrets Manager | 1Password | Doppler |
|---|---|---|---|---|
| Setup Complexity | High | Medium | Low | Low |
| Best For | Enterprise | AWS-native teams | Small teams | Startups |
| Dynamic Secrets | Yes | Limited | No | No |
| Audit Logging | Yes | Yes (CloudTrail) | Yes | Yes |
| Auto-rotation | Yes | Yes (AWS services) | No | Yes |
Implementation Patterns for Different Tech Stacks
Moving from .env files to a secrets manager requires code changes. Here are patterns for common stacks.
Node.js / JavaScript
Before (with dotenv):
require('dotenv').config();

const dbConnection = {
  host: process.env.DB_HOST,
  password: process.env.DB_PASSWORD
};
After (with AWS Secrets Manager):
import { SecretsManagerClient, GetSecretValueCommand } from '@aws-sdk/client-secrets-manager';

const client = new SecretsManagerClient({ region: 'us-east-1' });

async function getDbCredentials() {
  const command = new GetSecretValueCommand({ SecretId: 'prod/database' });
  const response = await client.send(command);
  return JSON.parse(response.SecretString);
}

// Initialize once at startup (top-level await requires an ES module)
const dbCredentials = await getDbCredentials();
After (with Doppler):
# In your deployment script or Dockerfile
doppler run -- node app.js
Doppler injects secrets as environment variables, so your code can remain unchanged - but now those values come from a secure, audited source rather than a file.
Python
Before (with python-dotenv):
from dotenv import load_dotenv
import os
load_dotenv()
api_key = os.getenv('API_KEY')
After (with AWS Secrets Manager via boto3):
import boto3
import json

def get_secret(secret_name):
    client = boto3.client('secretsmanager', region_name='us-east-1')
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response['SecretString'])

credentials = get_secret('prod/api-credentials')
api_key = credentials['api_key']
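Calling Secrets Manager on every request adds latency and API-call cost, so it's common to cache the value for the life of the process. Here is a minimal sketch using only the standard library; AWS also ships a dedicated caching client if you need TTL-based refresh.

```python
import json
from functools import lru_cache

import boto3


@lru_cache(maxsize=None)
def get_secret(secret_name: str) -> dict:
    """Fetch a secret once per process and reuse the parsed value."""
    client = boto3.client('secretsmanager', region_name='us-east-1')
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response['SecretString'])
```

The trade-off is that a rotated value is only picked up after a restart (or an explicit `get_secret.cache_clear()`), so keep process lifetimes shorter than your rotation overlap window.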
Docker and Kubernetes
For containerized applications, avoid baking secrets into images or passing them as environment variables in plain text.
Kubernetes Secrets (basic):
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
data:
  password: <base64-encoded-value>
Important security note: Kubernetes Secrets are base64-encoded, not encrypted. By default, they are stored as plaintext in etcd. For production environments, you should:
- Enable encryption at rest for etcd
- Use RBAC to restrict Secret access
- Consider external secrets managers (as shown below)
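The base64 point is easy to demonstrate: anyone who can read the Secret object (for example via `kubectl get secret db-credentials -o yaml`) can recover the plaintext instantly. The encoded value below is a made-up example.

```python
import base64

# Value copied from a Secret manifest or from `kubectl get secret -o yaml`
encoded = 'c3VwZXItc2VjcmV0LXBhc3N3b3Jk'
print(base64.b64decode(encoded).decode())  # prints: super-secret-password
```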
External Secrets Operator (recommended):
The External Secrets Operator syncs secrets from external providers (AWS, Vault, etc.) into Kubernetes Secrets:
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager
    kind: SecretStore
  target:
    name: db-credentials
  data:
    - secretKey: password
      remoteRef:
        key: prod/database
        property: password
This keeps your secrets manager as the source of truth while giving Kubernetes workloads access through native Secret objects.
Rotating Secrets Safely
Secret rotation shouldn't cause downtime. The key is supporting multiple valid credentials during the transition period.
The Dual-Write Pattern
1. Generate new credential in your secrets manager
2. Update the service to accept both old and new credentials
3. Deploy applications with the new credential
4. Monitor for any systems still using the old credential
5. Revoke the old credential once all systems have migrated
For database passwords, this means creating a new user or updating the password while the old one remains valid temporarily.
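With AWS Secrets Manager, one way to implement the overlap is to lean on version stages: after rotation the new value is labeled AWSCURRENT and the value it replaced stays readable as AWSPREVIOUS. The sketch below shows a connection helper that falls back to the previous version during the transition; the secret name, field names, and the use of psycopg2 are assumptions for illustration.

```python
import json

import boto3
import psycopg2  # example driver for this sketch; any client with a connect() call works
from botocore.exceptions import ClientError

client = boto3.client('secretsmanager', region_name='us-east-1')


def fetch_credentials(stage):
    response = client.get_secret_value(SecretId='prod/database', VersionStage=stage)
    return json.loads(response['SecretString'])


def connect():
    # Try the freshly rotated credential first, then fall back to the
    # previous one while older deployments are still draining.
    for stage in ('AWSCURRENT', 'AWSPREVIOUS'):
        try:
            creds = fetch_credentials(stage)
            return psycopg2.connect(
                host=creds['host'],
                user=creds['username'],
                password=creds['password'],
                dbname=creds['dbname'],
            )
        except (ClientError, psycopg2.OperationalError):
            continue
    raise RuntimeError('Neither the current nor the previous credential worked')
```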
Automation with Secrets Managers
AWS Secrets Manager can rotate credentials automatically for supported services: you define a rotation Lambda function, set a schedule, and the service handles the rest.
HashiCorp Vault's dynamic secrets take this further: credentials are generated on-demand with automatic expiration. Your application requests credentials, uses them, and they expire automatically - no rotation needed because nothing is long-lived.
Emergency Credential Rotation Procedures
When you suspect a credential has been compromised, speed matters. Have a runbook ready before you need it.
Emergency Rotation Runbook Template
1. Confirm the incident - Verify the credential was actually exposed
2. Assess blast radius - What can this credential access?
3. Rotate immediately - Generate new credentials in your secrets manager (see the API sketch after this runbook)
4. Deploy updates - Push new credentials to all dependent systems
5. Revoke old credential - Disable the compromised credential
6. Audit access logs - Determine if the credential was used maliciously. Assume malicious use until proven otherwise - check for unauthorized data access, configuration changes, or privilege escalation during the window of exposure.
7. Document and report - Create an incident report for compliance records
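If the compromised secret lives in AWS Secrets Manager and already has a rotation function attached, step 3 can be a single API call, and the secret's metadata gives a starting point for step 6 (detailed access history still comes from CloudTrail). A rough sketch; the secret name is a placeholder.

```python
import boto3

client = boto3.client('secretsmanager', region_name='us-east-1')

# Step 3: rotate now rather than waiting for the next scheduled window.
client.rotate_secret(SecretId='prod/payments/api-key', RotateImmediately=True)

# Step 6: metadata is only a coarse signal (day-level granularity);
# pull GetSecretValue events from CloudTrail for the full picture.
meta = client.describe_secret(SecretId='prod/payments/api-key')
print('Last accessed:', meta.get('LastAccessedDate'))
print('Last rotated:', meta.get('LastRotatedDate'))
```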
Communication Checklist
- Notify your security team immediately
- Inform engineering leadership
- If customer data may be affected, loop in legal and communications
- For SOC 2, document the incident and your response
Audit Logging for Secret Access
Proper audit logging answers the questions investigators ask after an incident.
What to Log
- Who accessed the secret (user or service identity)
- When the access occurred (timestamp)
- What secret was accessed (secret identifier)
- Where the request originated (IP address, service)
- Why it was accessed (if context is available)
Implementation
Most secrets managers provide this automatically:
- AWS Secrets Manager logs to CloudTrail (see the query sketch below)
- HashiCorp Vault has built-in audit devices
- Doppler provides access logs in the dashboard
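For example, on AWS you can pull recent GetSecretValue events out of CloudTrail to see who read which secret and from where. This is a minimal sketch with boto3; note that CloudTrail's event history covers roughly the last 90 days, so longer retention requires a trail delivered to S3 or CloudWatch Logs.

```python
import json

import boto3

cloudtrail = boto3.client('cloudtrail', region_name='us-east-1')

events = cloudtrail.lookup_events(
    LookupAttributes=[
        {'AttributeKey': 'EventName', 'AttributeValue': 'GetSecretValue'}
    ],
    MaxResults=50,
)

for event in events['Events']:
    detail = json.loads(event['CloudTrailEvent'])
    print(
        event['EventTime'],                                         # when
        detail['userIdentity'].get('arn', 'unknown'),               # who
        detail.get('sourceIPAddress'),                              # where from
        (detail.get('requestParameters') or {}).get('secretId'),    # which secret
    )
```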
Retention and Alerting
For SOC 2 audits, industry best practice is to retain audit logs for at least one year (SOC 2 does not mandate a specific duration, but auditors typically expect this). Set up alerts for anomalies:
- Access from unexpected IP addresses
- Unusual access patterns (volume or timing)
- Failed access attempts
How Auditors Evaluate Secrets Management
For SOC 2 Type II audits, secrets management falls primarily under CC6.1 (Logical and Physical Access Controls) of the Trust Services Criteria.
What Auditors Look For
- Inventory of secrets - Do you know what secrets exist and where they're stored?
- Access controls - Is access restricted based on job function (least privilege)?
- Encryption - Are secrets encrypted at rest and in transit?
- Audit logging - Can you demonstrate who accessed what and when?
- Rotation policy - Do you have a policy and evidence of regular rotation?
- Incident response - What's your process when credentials are compromised?
Common Audit Findings
- Secrets stored in code repositories (even if in `.gitignore`)
- Shared credentials across team members
- No evidence of rotation in the audit period
- Missing or incomplete access logs
- No documented policy for secrets management
Remediation
Start with documentation: write a secrets management policy that defines where secrets can be stored, how access is granted, and rotation requirements. Then implement tooling to enforce the policy and generate evidence.
Key Takeaways
- `.env` files lack access control, audit logging, and rotation capabilities - they're convenient but insecure for production use.
- AI coding agents amplify the risk. Secrets in `.env` files are now exposed to third-party AI providers and can be used autonomously by agents without explicit approval. Proper secrets management keeps credentials out of AI context windows entirely.
- Choose tooling based on your context: Doppler or 1Password for small teams, AWS Secrets Manager for AWS-native shops, Vault for complex enterprise requirements.
- Migration doesn't have to be all-or-nothing. Start with your most sensitive credentials (database passwords, payment API keys) and expand from there.
- Automate rotation where possible. Manual rotation doesn't happen as often as it should.
- Audit logging is non-negotiable for compliance and incident response.
- Document your policies before the auditor asks. Having a written secrets management policy demonstrates security maturity.
Implementing secrets management is one piece of a comprehensive security program. If you're preparing for SOC 2 or ISO 27001 certification, Bastion can help you implement the right controls and gather evidence for your audit.