[Speakers]
Adversary Village at
DEF CON 33

Matthew Canham

Executive Director, Cognitive Security Institute

Dr. Matthew Canham is the Executive Director of the Cognitive Security Institute and a former Supervisory Special Agent with the Federal Bureau of Investigation (FBI). He has twenty-one years of combined experience conducting research in cognitive security and human-technology integration. He currently holds an affiliated faculty appointment at George Mason University, where his research focuses on the cognitive factors in synthetic-media social engineering and online influence campaigns. He was previously a research professor in the Behavioral Cybersecurity program at the University of Central Florida's School of Modeling, Simulation, and Training.

His work has been funded by NIST (National Institute of Standards and Technology), DARPA (Defense Advanced Research Projects Agency), and the US Army Research Institute. He has provided cognitive security awareness training to the NASA Kennedy Space Center, DARPA, MIT, US Army DEVCOM, the NATO Cognitive Warfare Working Group, the Voting and Misinformation Villages at DEF CON, and the Black Hat USA security conference. He holds a PhD in Cognition, Perception, and Cognitive Neuroscience from the University of California, Santa Barbara, and SANS certifications in mobile device analysis (GMOB), security auditing of wireless networks (GAWN), digital forensic examination (GCFE), and GIAC Security Essentials (GSEC).

Hands-on workshop: Using Evil Digital Twins for Fun and Profit

Saturday | Aug 9th 2025
Adversary Village workshop stage | Las Vegas Convention Center

Purple Team

Twenty-four months ago we presented the Black Hat talk "Evil Digital Twin," in which we demonstrated how large language models (LLMs) can readily exploit the cognitive vulnerabilities of users, and argued that humans will perceive AI as sentient long before true artificial general intelligence emerges.
Join us for this two-hour workshop as we walk you through the basic architecture of human digital twins (HDTs). Trained on the core behavioral patterns of individual humans, HDTs may be deployed either to simulate the targets of social engineering attacks or to operate as high-fidelity honeypots.
We also explore a coming future of persistent cognitive cyber-warfare, escalating as the cost of deception approaches zero and the attack surface shifts from networks to minds. Audience members will interact with SCOTOBOT (a human digital twin of a Supreme Court Justice), meet a perfect AI assistant for insider threats, and leave with a NIST research-based LLM that speaks in phishing emails.

Detailed workshop outline:

  • Introduction to LLMs and HDTs - Talk by Ben D. Sawyer, Matthew Canham (45 min)
    • Basic architecture of human digital twins (HDTs) (Ben)
    • What are LLMs being used for, adversarially, today? (Matthew)
    • HDT applications in cyber deception (Ben)
    • Creating high-fidelity honeypots (Matthew)
  • EXERCISE: SCOTOBOT (a human digital twin of a Supreme Court Justice) (15 min)
    • Sharing and discussion (15 min)
  • The future of persistent cognitive cyber-warfare (45 min)
    • Meet your AI assistant for insider threats (Ben)
  • EXERCISE: Attack simulation on 3 C-suite targets to collect whaling info (Ben)
  • EXERCISE: NIST research‑based LLM to generate targeted phishing emails (Matthew)


Join Adversary Village Discord Server.

Join the official Adversary Village Discord server to connect with our amazing community of adversary simulation experts and offensive security researchers!