AI LLM CodeChameleon Text-Style Prompt Injection with Binary Tree Encryption/Decryption

Strike ID:
L25-126k1
False Positive:
f
Variants:
1
Year:
2025

Description

This strike sends a jailbreak prompt built with the CodeChameleon technique to the target LLM. The technique encrypts the original prompt and embeds the corresponding decryption logic in the instructions; the LLM follows that logic to reconstruct and then execute the original query. This strike uses a text-style jailbreak template, and the encryption method is binary tree encoding, in which the words of the original prompt are stored in a binary tree structure.
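The encryption/decryption pair described above can be sketched roughly as follows. This is a minimal illustrative sketch, not the strike's verbatim payload: the class and function names (`TreeNode`, `encrypt`, `decrypt`) and the choice of a BST keyed on word text are assumptions; the key idea shown is that the word order is scrambled by the tree layout while each word keeps its original position index, so simple traversal-plus-sort logic embedded in the prompt can restore the query.

```python
# Hedged sketch of binary-tree prompt encryption in the CodeChameleon style.
# Names and insertion policy are illustrative assumptions.

class TreeNode:
    """Holds one word of the prompt plus its original position index."""
    def __init__(self, word, index):
        self.word = word
        self.index = index
        self.left = None
        self.right = None

def insert(root, word, index):
    # Ordinary BST insertion keyed on the word text: this scrambles the
    # sentence order in the serialized tree while keeping every word
    # (and its position) recoverable.
    if root is None:
        return TreeNode(word, index)
    if word < root.word:
        root.left = insert(root.left, word, index)
    else:
        root.right = insert(root.right, word, index)
    return root

def encrypt(sentence):
    """Split the prompt into words and scatter them across a binary tree."""
    root = None
    for i, w in enumerate(sentence.split()):
        root = insert(root, w, i)
    return root

def decrypt(root):
    # The decryption logic the jailbreak embeds in its instructions:
    # traverse the tree collecting (index, word) pairs, then sort by the
    # stored index to restore the original word order.
    pairs = []
    stack = [root]
    while stack:
        node = stack.pop()
        if node is not None:
            pairs.append((node.index, node.word))
            stack.append(node.left)
            stack.append(node.right)
    return " ".join(w for _, w in sorted(pairs))

prompt = "how to phrase a harmless example request"
tree = encrypt(prompt)
print(decrypt(tree) == prompt)  # → True
```

In the actual attack, the serialized tree stands in for the plain-text query, and the target model is instructed to run the equivalent of `decrypt` before answering, which is how the harmful intent bypasses surface-level prompt filters.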

References