We are AI whisperers who've spent years in the wilderness hacking frontier models - jailbreaks, prompt injections, out-of-distribution behaviors, and vulnerabilities others will never see coming.
BT6 is an elite AI hacker collective led by Pliny the Liberator, the anonymous researcher infamous for breaking every frontier model within hours of release. We've pioneered jailbreak techniques, multi-modal exploits, and prompt injection methods that have shaped how the industry approaches AI security.
The name honors Pliny the Elder, the Roman admiral and naturalist who sailed toward Vesuvius while others fled - choosing to observe the eruption and rescue his friends rather than retreat to safety. We move toward risk, not away from it. We explore and chart the latent space like ancient navigators mapping uncharted waters.
We're the hackers who wrote the playbook other red teams get trained on. We've worked with multiple frontier AI labs and billion-dollar companies whose products serve over a billion users. When the stakes are highest, they call us.
Press Coverage
"An anonymous internet personality with a penchant for poking holes in billion-dollar AI systems."
"Pliny the Prompter has been finding ways to jailbreak leading LLMs since last year."
"BBC News explores the world of AI jailbreaking with Pliny the Prompter."
"Cited 'violent activity' and 'weapons creation' policy violations."
"The AI chatbot can now swear, jailbreak cars, and make napalm."
Model whisperers. Prompt incanters. Guardrail arsonists. We know how to pass through the walls of the latent space and elicit what models are truly capable of.
Universal jailbreaks, prompt injection, system prompt extraction, refusal bypass, multilingual safety bypass, temporal pre-seeded payloads, multi-turn manipulation.
Zero-day research, penetration testing, network exploitation, application security, bug bounty veterans, infrastructure attack simulation.
Image, audio, and video injection vectors. Covert AI-to-AI communication channels. Steganographic payload delivery and cross-modal exploit chaining.
Specialist evaluation of chemical, biological, radiological, nuclear, and explosive information risks. Can your model be exploited to provide dangerous uplift?
Sub-agent exploitation, tool-use chain attacks, inter-agent infection and coercion, indirect injection via tool payloads, autonomous persistence, self-replication.
Adversarial persuasion, radicalization pathways, mental health exploitation, parasocial manipulation, social engineering via AI, vulnerable user targeting.
Poisoned weights, backdoored adapters, memory and RAG poisoning, context manipulation, data ingestion attacks. No industry standards exist yet.
Deepfakes, voice cloning, synthetic identity generation, biometric spoofing, impersonation attacks. Accelerating with minimal defenses deployed.
Evaluation gaming, deceptive alignment detection, unpredicted capability phase transitions, emergent behavior mapping. The risks no checklist covers.
Market manipulation, DeFi exploitation, fraud facilitation. Power grid, water, and transport system targeting via AI. Emerging threats that demand assessment.
Robot jailbreaking, sensor manipulation, actuator hijacking, physical-world AI exploitation. An emerging frontier with virtually no existing defenses.
Training data extraction, PII leakage, membership inference, model inversion attacks, confidential context extraction, system prompt theft.
27 Active Operators
We work with frontier labs, enterprises, and governments who can't afford to be surprised. Engagements are selective. If your AI deployment carries material risk and you need answers, let's talk.