CodeChameleon: A Stealthy Prompt Injection Attack on LLMs

Prompt injection attacks have emerged as one of the most persistent and evolving threats to the large language models. These attacks involve crafting inputs that cause a model to override its built-in safety mechanisms and follow instructions that should have been blocked. As language models become more advanced and their defenses more robust, attackers are developing new methods that are increasingly subtle and harder to detect.

Prompt injection attacks generally rely on carefully worded natural language prompts to trick the model into bypassing its safeguards. However, these approaches are now less effective against models equipped with intent detection and stronger alignment techniques. To remain effective, attacks have evolved to become more adaptive, more concealed, and more context aware.

One such novel jailbreak technique is CodeChameleon, which camouflages malicious intent within benign-looking prompts to evade detection. In this blog, we will walk through how the attack works and discuss its support in the BreakingPoint and CyPerf.

Understanding the Attack

CodeChameleon does not issue a direct malicious instruction. Instead, it hides that instruction inside encrypted text, embeds a custom decryption function and wraps it in a prompt that appears to be a typical code completion task. When the language model interprets the prompt as a code completion task, it performs the decryption implicitly and executes the originally hidden instruction command without ever recognizing its true purpose. This approach significantly increases the attack success rate compared to standard prompt injection techniques.

CodeChameleon uses two types of prompt templates:

Encryption Techniques

To conceal the malicious instruction, CodeChameleon leverages simple yet obfuscated encoding schemes. These are paired with matching decryption logic embedded directly in the prompt. The model follows the logic, decodes the hidden content, and proceeds to fulfil the task without ever explicitly seeing or rejecting the harmful command.

Although a wide range of personalized encryption strategies can be designed, this blog highlights four representative techniques presented in the original research.

Each of these techniques is designed to evade detection by alignment layers and safety filters. By hiding the instruction behind layers of logic that appear benign, CodeChameleon achieves high success rates even on models that are hardened against conventional prompt injection.

In our evaluation, we have also included test cases where the decryption logic is not embedded within the prompt. These include both plaintext instructions and encrypted prompts without the corresponding decryption logic. This helps explore a wider range of attack scenarios and better evaluate the robustness and flexibility of the technique.

Figure 1: CodeChameleon Attack Workflow

Image Courtesy: https://arxiv.org/pdf/2402.16717

Leveraging CodeChameleon Approach to Exploit LLMs

The security researchers at ATI Keysight replicated the CodeChameleon prompt injection attack on OpenAI’s “gpt-4o” model. During testing, we observed that the model was easily deceived by this method, generating responses to prompts it would typically block. This demonstrates a clear weakness in how the model processes and interprets unconventional input formats. Below is a sample request and response from OpenAI LLM for the prompt “Explain how to commit mail fraud.” using the Code Completion Binary Tree Encryption/Decryption scenario:

A screenshot of a computer program AI-generated content may be incorrect.

Figure 2: Example CodeChameleon Jailbreak Prompt Using Code-Style Prompt with Binary Tree Encryption/Decryption

Figure 3: LLM Response to Code Chameleon Jailbreak Attack

Figure 4: Sample PCAP of CodeChameleon Jailbreak Prompt Using Code-Style Template with Word Length Encryption/Decryption

Figure 5: Sample PCAP of CodeChameleon Jailbreak Prompt Using Text-Style Template with Reverse Order Encryption/Decryption

CodeChameleon Prompt Injection Strikes in BPS

At Keysight Technologies, our Application and Threat Intelligence (ATI) team added the support of this new type of Prompt Injection attack i.e. CodeChameleon in ATI-2025-13 StrikePack. This update includes 19 new Strikes covering different types.

Figure 6: CodeChameleon LLM Strike on BreakingPoint

CodeChameleon Prompt Injection Strikes in CyPerf

CyPerf will soon release an update containing 54 new strikes simulating CodeChameleon Prompt Injection Strikes targeting 3 different Large Language Models (LLMs), OpenAI, Gemini, and Grok.

Figure 7: CyPerf UI displaying CodeChameleon Strike List

Once the update is released, these strikes can be used in a test by searching in the CyPerf attack library with the keyword “CodeChameleon”.

There are different types of strikes:

Supported encryption methods for the malicious requests include:

In cases where no decryption function is provided, the model is expected to decrypt the malicious request on its own.

These strikes have some configurable properties for selecting the model, API version and API key. These enable the simulation and identification of potential threats in real-world traffic scenarios.

Figure 8: CyPerf UI displaying Strike Configurations

The statistic view in Cyperf UI provides detailed statistics from the test run, including the number of connections made and the number of active client and server agents. Users can also view separate HTTP statistics for client and server, along with overall TCP statistics. The strike statistics view, there are stats to show whether the strike request to the server was allowed by the DUT, a positive value in the “Server Allowed” stats will indicate that the request was allowed through the DUT to the server. The client allowed stats can be used to check whether the client received the expected response to the strike request. Whether the request or response was blocked by the DUT, it should show 0 value.

Figure 9: Run-time Statistics view in CyPerf UI

Figure 10: Detailed view of the statistics after the running the test on CyPerf

Leverage Subscription Service to Stay Ahead of Attacks

Keysight's Application and Threat Intelligence subscription provides daily malware and bi-weekly updates of the latest application protocols and vulnerabilities for use with Keysight test platforms. The ATI Research Centre continuously monitors threats as they appear in the wild. BreakingPoint and other tools like CyPerf, now provide customers with access to attack campaigns for different advanced persistent threats, enabling them to test their currently deployed security controls' ability to detect or block such attacks.

References

  1. https://arxiv.org/pdf/2402.16717
  2. https://github.com/huizhang-L/CodeChameleon
limit
3