
https://venturebeat.com/security/ai-agent-runtime-security-system-card-audit-comment-and-control-2026
Three AI coding agents leaked secrets through a single prompt injection. One vendor's system card predicted it
A security researcher, opened a GitHub pull request, typed a malicious instruction into the PR title, and watched Anthropic’s Claude Code Security Review action post its own API key as a comment. The same prompt injection worked on Google’s Gemini CLI Action and GitHub’s Copilot Agent (Microsoft). No external infrastructure required.
Е-е-е-е един плюс (и) на облако - вече за да го хаквате "No external infrastructure required" - хайде сега да почерпите
