Exposure of Sensitive Information Through Environment Variables
Exposure of Sensitive Information Through Environment Variables occurs when applications store sensitive information in plaintext environment variables. Environment variables are accessible to every process running in the same execution context, including child processes, dependencies, and, in cloud environments, other functions and containers.
Developers use environment variables to configure applications without hardcoding credentials, which is good practice in itself. But when those variables hold unencrypted passwords, API keys, database credentials, or tokens, they become a liability readable by any process that can see the environment.
Real-World Attack Scenarios
Scenario 1: Process Inspection on Shared Server
A shared hosting environment runs multiple applications as the same user. An attacker compromises one application and uses ps or /proc to inspect other running processes:
Vulnerable setup:
```bash
# Application 1 - Vulnerable to compromise
APP1_DB_PASSWORD=production_pass_123 node app1.js

# Application 2 - Running as the same user
APP2_API_KEY=sk_live_4eC39HqLyjWDarht node app2.js

# Application 3
APP3_STRIPE_SECRET=rk_live_51234567890 python app3.py
```
The attack:
```bash
# After compromising app1, the attacker runs (the "e" flag shows each process's environment):
ps auxe | grep -iE "app|key|password"

# Output:
# user 1234 /usr/bin/node app1.js
# user 1235 /usr/bin/node app2.js APP2_API_KEY=sk_live_4eC39HqLyjWDarht
# user 1236 /usr/bin/python app3.py APP3_STRIPE_SECRET=rk_live_51234567890

# Or read /proc directly:
cat /proc/1235/environ | tr '\0' '\n' | grep API_KEY
```
The attacker now has API keys and secrets from all running applications.
Finding it: Check how environment variables are set. On shared systems, test if you can inspect other processes' environment. Look for sensitive data in process listings.
Exploit: read /proc/[PID]/environ or run ps eww -p [PID] against sibling processes to dump their environments.
Scenario 2: Environment Variables in Logs and Error Messages
An application logs its configuration during startup:
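A minimal sketch of the pattern, assuming a standard Python logger (the variable names and values are illustrative):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("app")

def format_startup_log(env):
    # Naive "debug aid": dump the entire configuration, secrets included
    return f"Starting with config: {env}"

# Illustrative environment; in a real app this would be dict(os.environ)
env = {"DB_HOST": "db.internal", "DB_PASSWORD": "production_pass_123"}
logger.info(format_startup_log(env))  # the password is now in the log stream
```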
The resulting log lines contain every variable, secrets included. If those logs are accessible (via log aggregation services, exposed log files, or log storage), an attacker who reads them has all of the credentials at once.
The attack vectors:
Reading application log files directly
Accessing log aggregation systems (ELK, Splunk, CloudWatch)
Exploiting log forwarding services with weak authentication
Finding logs in backup files or archives
Searching git history for log outputs
Finding it: Check application startup logs. Look for environment variable dumps. Test if configuration is logged. Review what gets logged during errors.
Exploit: grep collected logs (files, ELK/Splunk indexes, CloudWatch exports) for patterns such as PASSWORD, API_KEY, SECRET, or TOKEN.
Scenario 3: Child Process Inheritance in Dependency Injection
An application uses environment variables for configuration and executes third-party tools or dependencies, which inherit the full environment by default.
A compromised dependency (such as a trojaned ImageMagick binary) or a command-injection flaw in the executed command can access every inherited environment variable:
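The inheritance is easy to demonstrate; here a `python -c` child stands in for the third-party tool, and the secret name is illustrative:

```python
import os
import subprocess
import sys

# Parent process: a secret sits in the environment
env = dict(os.environ, DB_PASSWORD="production_pass_123")

# The child process, standing in for a third-party tool such as ImageMagick,
# inherits the full environment by default
result = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DB_PASSWORD'])"],
    env=env, capture_output=True, text=True,
)
print(result.stdout.strip())  # the "tool" can read the secret
```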
The attack:
Application calls third-party tool with inherited environment
Attacker compromises or exploits the third-party tool
Malicious code reads environment variables
Credentials are exfiltrated
Finding it: Identify where applications spawn child processes. Check what environment variables are inherited. Look for dependencies that execute external tools. Test if injected commands can access environment.
Exploit: from the spawned tool or an injected command, run env (or read os.environ) and exfiltrate the output.
Scenario 4: Docker Container Environment Exposure
A Docker container is built with secrets in environment variables:
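A sketch of the anti-pattern (image name and values illustrative). ENV values are baked into the image layers and visible to anyone who can pull the image:

```dockerfile
FROM node:20-alpine

# Anti-pattern: secrets baked into the image as environment variables
ENV DB_PASSWORD=production_pass_123
ENV API_KEY=sk_live_4eC39HqLyjWDarht

COPY . /app
WORKDIR /app
CMD ["node", "app.js"]
```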
The attack vectors:
Image inspection: anyone who can pull the Docker image can read the baked-in variables with docker inspect.
Running container access: if the container is compromised, a single env command dumps every variable.
Kubernetes secrets as environment: if Kubernetes Secrets are injected as environment variables rather than mounted as files, they inherit all of these weaknesses.
Any process in the pod can read environment variables.
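A pod spec fragment showing that pattern (names are illustrative); the secret value becomes an ordinary environment variable inside every container that references it:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example/app:latest
      env:
        - name: DB_PASSWORD          # visible to every process in the container
          valueFrom:
            secretKeyRef:
              name: db-credentials   # the Kubernetes Secret
              key: password
```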
Finding it: Check Docker images for environment variable exposure. Inspect running containers. Test Kubernetes pod access. Review how secrets are mounted.
Exploit: run docker inspect against the image, or env inside the compromised container.
Scenario 5: .env File Committed to Version Control
Developers commit the .env file containing all environment variables to git:
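A typical committed .env file looks like this (all values are illustrative):

```bash
# .env - committed by mistake
DB_HOST=db.internal
DB_PASSWORD=production_pass_123
API_KEY=sk_live_4eC39HqLyjWDarht
STRIPE_SECRET=rk_live_51234567890
AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```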
The attack:
Repository is publicly accessible or breached
Attacker clones the repository
Entire .env file with all credentials is extracted
Attacker has production credentials for the entire infrastructure
Finding it: Search repositories for .env files. Check git history for environment variable leaks. Use tools like truffleHog to scan for credentials.
Exploit: clone the repository and read .env, then mine git history with git log -p or truffleHog for credentials that have since been rotated out of the working tree.
Scenario 6: Serverless Function Environment Exposure
AWS Lambda functions are configured with environment variables:
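Configuring them typically looks like this; a sketch using the AWS CLI, with the function name and values illustrative:

```bash
aws lambda update-function-configuration \
  --function-name process-payments \
  --environment "Variables={DB_PASSWORD=production_pass_123,API_KEY=sk_live_4eC39HqLyjWDarht}"
```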
The attack:
Lambda function has a vulnerability that yields code execution (command injection, insecure deserialization, SSRF, etc.)
Attacker exploits the function to execute code
Code reads and exfiltrates environment variables
AWS credentials and secrets are compromised
Additionally, the temporary credentials for the function's execution role (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN) are injected into the environment by the Lambda runtime, so exfiltrating the environment lets the attacker act with the function's full IAM permissions.
Finding it: Test Lambda functions for code injection. Check if environment variables can be exfiltrated. Review Lambda IAM roles and attached policies.
Mitigation Strategies
Never store secrets in environment variables. Use a secrets management system instead:
AWS Secrets Manager
HashiCorp Vault
Azure Key Vault
Kubernetes Secrets
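As a sketch of the alternative, a helper that pulls a secret at runtime instead of reading os.environ. The secret id and JSON shape are assumptions; in real use the client would be boto3.client("secretsmanager"), whose get_secret_value call returns the payload under "SecretString". Passing the client in keeps the function testable without AWS access:

```python
import json

def get_secret(client, secret_id):
    # client is expected to behave like boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

# Usage (requires AWS credentials):
#   import boto3
#   creds = get_secret(boto3.client("secretsmanager"), "prod/db-credentials")
#   password = creds["password"]
```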
If environment variables must be used:
Encrypt sensitive values in environment:
Never log environment variables
Don't commit .env files to version control
Rotate credentials regularly. Environment variables holding static credentials become more dangerous the longer they exist.
Use least privilege for processes. Run processes with the minimal required permissions to limit the damage from a compromise.
Don't pass secrets to child processes
Audit environment variable access
Monitor what processes read environment variables
Log when secrets are accessed
Alert on suspicious access patterns
For cloud environments:
Use IAM roles instead of environment credentials (AWS)
Mount secrets as files instead of environment variables
Use service accounts with proper RBAC (Kubernetes)
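The file-mount alternative for Kubernetes looks like this (names illustrative); the secret never appears in any process's environment:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example/app:latest
      volumeMounts:
        - name: db-credentials
          mountPath: /etc/secrets
          readOnly: true
  volumes:
    - name: db-credentials
      secret:
        secretName: db-credentials   # the app reads /etc/secrets/password
```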
Exploit reference for Scenario 1 (process inspection):

```bash
# List the environment variables of a target process
cat /proc/[PID]/environ | tr '\0' '\n'

# Or use ps to see the environment of running processes
ps eww -p [PID]

# If you have code execution, grep the environment for secrets
env | grep -i "key\|password\|secret\|token\|api"
```
Exploit reference for Scenario 4 (Docker):

```bash
# If you have access to the Docker image:
docker inspect [image_id] --format='{{json .Config.Env}}' | jq

# If you have code execution in the container:
env | grep -E "SECRET|KEY|PASSWORD|TOKEN"
```
Exploit reference for Scenario 5 (.env in version control):

```bash
# Search for .env files in the repository
find . -name ".env*" -o -name "*.env*"

# Check git history for .env commits
git log --all --full-history -- ".env"

# Extract secrets from the entire commit history
git log -p | grep -E "PASSWORD|API_KEY|SECRET"

# Use a scanner to find credentials
truffleHog filesystem . --json
```
Scenario 6 reference: a Lambda handler whose secrets live in the environment:

```python
import os

def lambda_handler(event, context):
    db_password = os.environ.get('DB_PASSWORD')
    api_key = os.environ.get('API_KEY')
    stripe_secret = os.environ.get('STRIPE_SECRET')

    # If the function has a vulnerability (injection, SSRF, etc.),
    # an attacker can extract the entire environment
    return {
        'statusCode': 200,
        'body': 'Processed'
    }
```
Encrypting values before they enter the environment:

```python
import os
from cryptography.fernet import Fernet

# In practice the key comes from a KMS or secrets manager,
# never from the same environment as the ciphertext
key = Fernet.generate_key()
f = Fernet(key)

# Store only ciphertext in the environment
os.environ['DB_PASSWORD_ENCRYPTED'] = f.encrypt(b"password").decode()

# Decrypt only at the moment of use
password = f.decrypt(os.environ['DB_PASSWORD_ENCRYPTED'].encode())
```
Logging without leaking secrets:

```python
# Bad
logger.info(f"Config: {os.environ}")

# Good
logger.info("Application started")
logger.debug(f"Database host: {os.environ.get('DB_HOST')}")  # never the password
```
Keeping .env files out of version control:

```
# .gitignore
.env
.env.local
.env.*.local
```
Passing a reduced environment to child processes:

```python
import os
import subprocess

# Bad - the child process inherits the full environment
subprocess.run(['tool', 'arg'], env=os.environ)

# Good - strip secrets before spawning (an explicit allowlist is better still)
safe_env = os.environ.copy()
safe_env.pop('DB_PASSWORD', None)  # pop() avoids a KeyError if the variable is unset
safe_env.pop('API_KEY', None)
subprocess.run(['tool', 'arg'], env=safe_env)
```
Removing a secret from the environment once it has been read:

```python
import os

api_key = os.environ.pop('API_KEY', None)  # no longer visible to children or /proc
# ... use api_key ...
del api_key  # drop the local reference when done
```