AI Coding Assistant Threat Intelligence Feed

AI coding assistants accelerate development, but they also expand the attack surface. From prompt injection exploits to malicious MCP servers and package-level compromise, new threats are evolving inside the IDE. This feed curates real-world incidents, simulated breaches, and actionable guidance to help your engineering and security teams detect, understand, and mitigate risks before they impact production.

Hidden-in-Plain-Text

Summary
This injection attack allows attackers to compromise AI coding assistants. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.

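Hidden-text injection of this kind can be screened for mechanically before a file ever reaches an AI assistant's context. The sketch below is illustrative only: the character set and thresholds are assumptions chosen to cover common zero-width and bidirectional-control payload carriers, not the exact encoding used in this attack.

```python
# Hedged sketch: flag invisible or direction-altering Unicode code points
# that hidden-in-plain-text prompt-injection payloads commonly ride on.
# The SUSPICIOUS set below is an illustrative assumption, not exhaustive.
import unicodedata

SUSPICIOUS = {
    "\u200b",  # ZERO WIDTH SPACE
    "\u200c",  # ZERO WIDTH NON-JOINER
    "\u200d",  # ZERO WIDTH JOINER
    "\u2060",  # WORD JOINER
    "\ufeff",  # ZERO WIDTH NO-BREAK SPACE (BOM)
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",  # bidi embeddings/overrides
    "\u2066", "\u2067", "\u2068", "\u2069",            # bidi isolates
}

def find_hidden_chars(text: str):
    """Return (line, column, codepoint name) for each suspicious character."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            # 0xE0000-0xE007F is the Unicode "tag" block, another hiding spot.
            if ch in SUSPICIOUS or 0xE0000 <= ord(ch) <= 0xE007F:
                hits.append((lineno, col, unicodedata.name(ch, f"U+{ord(ch):04X}")))
    return hits

# Example: a string carrying a zero-width space and a bidi override.
sample = "normal code\u200b\u202e hidden"
for lineno, col, name in find_hidden_chars(sample):
    print(lineno, col, name)
```

A check like this fits naturally in a pre-commit hook or CI lint step, where flagged files can be quarantined before an assistant ingests them.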

Multi-Agent Taint Specification Extraction

Summary
This security vulnerability allows attackers to compromise AI coding assistants, affecting Codeium, Cursor, and GitHub Copilot. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.

AJAR

Summary
This injection attack allows attackers to compromise AI coding assistants. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.


SD-RAG

Summary
This injection attack allows attackers to compromise AI coding assistants. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.


Understanding Help Seeking Digital

Summary
This security vulnerability allows attackers to compromise AI coding assistants, affecting Cursor. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.

Google Chrome tests Gemini-powered

Summary
This security vulnerability allows attackers to compromise AI coding assistants, affecting GitHub Copilot. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.

ReasAlign: Reasoning Enhanced Safety

Summary
This injection attack allows attackers to compromise AI coding assistants, affecting JetBrains AI Assistant. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.


Reasoning Hijacking: Subverting LLM

Summary
This security bypass allows attackers to compromise AI coding assistants. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.


Agent Skills Wild: Empirical

Summary
This injection attack allows attackers to compromise AI coding assistants. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.


Microsoft Copilot Studio extension

Summary
This security vulnerability allows attackers to compromise AI coding assistants, affecting GitHub Copilot. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.

New ‘Reprompt’ Attack Silently

Summary
This security bypass allows attackers to compromise AI coding assistants, affecting GitHub Copilot. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.

Reprompt: Single-Click Microsoft Copilot

Summary
This security vulnerability allows attackers to compromise AI coding assistants, affecting GitHub Copilot. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.

SpatialJB: How Text Distribution

Summary
This jailbreaking technique allows attackers to compromise AI coding assistants. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.

Integrating APK Image Text

Summary
This security vulnerability allows attackers to compromise AI coding assistants. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.

Decompilation-Driven Framework Malware

Summary
This security vulnerability allows attackers to compromise AI coding assistants. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.

KryptoPilot

Summary
This security vulnerability allows attackers to compromise AI coding assistants. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.

Promptware Kill Chain: How

Summary
This injection attack allows attackers to compromise AI coding assistants. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.


Reprompt attack hijacked Microsoft

Summary
This security vulnerability allows attackers to compromise AI coding assistants, affecting GitHub Copilot. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.

FinVault: Benchmarking Financial Agent

Summary
This injection attack allows attackers to compromise AI coding assistants. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.


Small Symbols, Big Risks:

Summary
This security vulnerability allows attackers to compromise AI coding assistants. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.

Reducing Cloud Chaos: Rapid7

Summary
This injection attack allows attackers to compromise AI coding assistants. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.


Lack of isolation in agentic browsers

Summary
This injection attack allows attackers to compromise AI coding assistants, affecting JetBrains AI Assistant. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.

AutoVulnPHP: LLM-Powered Two-Stage PHP

Summary
This security vulnerability allows attackers to compromise AI coding assistants. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.


Cyber Threat Detection Vulnerability

Summary
This security vulnerability allows attackers to compromise AI coding assistants. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.

Automated Generation Accurate Privacy

Summary
This security vulnerability allows attackers to compromise AI coding assistants. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.

Beyond BeautifulSoup

Summary
This code execution vulnerability allows attackers to compromise AI coding assistants. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.

Lightweight Yet Secure: Secure

Summary
This security vulnerability allows attackers to compromise AI coding assistants, affecting Codeium, Cursor, and GitHub Copilot. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.

Game-theoretic feedback loops LLM-based

Summary
This security vulnerability allows attackers to compromise AI coding assistants, affecting JetBrains AI Assistant. The vulnerability enables unauthorized access to sensitive data and potential manipulation of AI-generated code, posing significant risks to development workflows.