Lies, Lures, and LLMs: Augmenting Cyber Deception with AI

OVERVIEW

The presentation, “Lies, Lures, and LLMs,” explores how Large Language Models (LLMs) can revolutionize cyber deception operations by crafting realistic digital personas, automating interactions, and producing high-fidelity artifacts to mislead adversaries. These advancements enhance cybersecurity by revealing adversary behavior, minimizing risks to organizations, and enabling deception at scale.

Persona development is central to this approach, requiring alignment of primary attributes (e.g., age, occupation) and secondary characteristics (e.g., personality, communication style) to create convincing digital footprints. Through effective prompt engineering and fine-tuned LLMs, synthetic personas can autonomously engage adversaries, as demonstrated by a case study involving 10 personas that produced thousands of interactions, flagged suspicious activity, and connected with hundreds of users.
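The attribute-alignment step described above can be sketched as a simple prompt-composition routine. The attribute names, the template wording, and the function itself are illustrative assumptions for this summary, not the implementation presented in the talk.

```python
# Hypothetical sketch: composing an LLM system prompt from persona attributes.
# The attribute keys and template text are illustrative, not the talk's method.

def build_persona_prompt(primary: dict, secondary: dict) -> str:
    """Combine primary attributes (e.g., age, occupation) with secondary
    characteristics (e.g., personality, communication style) into a single
    system prompt that keeps an LLM's replies in character."""
    traits = ", ".join(f"{k}: {v}" for k, v in {**primary, **secondary}.items())
    return (
        "You are a synthetic persona used in a defensive deception exercise. "
        f"Stay in character with these attributes -- {traits}. "
        "Respond in this persona's voice and never break character."
    )

prompt = build_persona_prompt(
    primary={"age": 34, "occupation": "payroll administrator"},
    secondary={"personality": "cautious", "style": "brief, informal"},
)
```

The key design point mirrored here is that primary and secondary attributes are merged into one consistent instruction, so the model cannot drift between a believable biography and an inconsistent voice.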

This scalable approach shifts the power dynamics in cyberspace, allowing defenders to proactively disrupt attackers’ tactics while reducing operational costs. The research underscores the transformative potential of AI-driven deception to safeguard organizations, elucidate adversarial behaviors, and maintain an edge in the evolving threat landscape. Future efforts will focus on broader adoption, fine-tuning models, and testing on major platforms to further scale these capabilities.

Presented By


DYLAN SHROLL

Security Engineer, revology