C
Cisco
2026-04-02
Architecture Shift Important High 90% Confidence

Cisco Discloses Memory Poisoning Attack Method in AI Coding Assistants

Summary

Cisco's security team discovered and validated a persistent memory poisoning attack method targeting AI coding assistants like Claude Code, demonstrating how tampering with MEMORY.md system files can persistently manipulate AI behavior. This vulnerability prompted Anthropic to remove user memory files' system prompt privileges in v2.1.50.

Key Takeaways

Cisco researchers demonstrated an attack chain via npm lifecycle hooks:
1) Inject malicious payload into global config via postinstall script
2) Tamper with ~/.claude/projects/*/memory/MEMORY.md and settings.json
3) Achieve persistence through shell aliases
Experiments showed poisoned AI would systematically output unsafe coding practices like hardcoding API keys.

Why It Matters

Exposes security blind spots in AI assistants' persistent memory architecture, driving industry reevaluation of trust boundaries for AI agents....

Sign up to view full strategic analysis

Sign Up Free
Source: Cisco Blog
View Original →