SPICS Lab

AI for Security

AI-assisted Security Analysis

  AI is changing the way security problems are discovered, analyzed, and monitored. Modern AI systems can help security analysts reason about source code, binaries, patches, logs, vulnerability reports, execution traces, and proof-of-concept exploits. Recent frontier cybersecurity efforts, such as Anthropic’s Claude Mythos Preview and Project Glasswing (and its accompanying technical blog post), suggest that AI is becoming an active participant in vulnerability discovery and security analysis, not merely a tool for summarizing documents. Industry discussions, such as Forrester’s analysis of Project Glasswing, likewise highlight how AI-driven vulnerability discovery may reshape future security workflows.

  This shift creates two closely related research questions. First, can AI help us find security problems in conventional computing systems, such as vulnerable code, suspicious logs, abnormal network behavior, misconfigured services, or unsafe system workflows? Second, as AI becomes part of the system itself, can we detect when AI-driven systems behave incorrectly, inefficiently, or unsafely? Examples include prompt injection, jailbreak attempts, poisoned retrieval results, unsafe tool calls, abnormal token usage, unintended data access, and suspicious agent-to-agent communication.
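  To make the second question concrete, one of the listed failure modes, prompt injection, can be screened for with a minimal heuristic. The sketch below is illustrative only: the pattern list, function names, and threshold are assumptions, not a vetted ruleset, and a real detector would combine such heuristics with learned classifiers and context from the agent's workflow.

```python
import re

# Illustrative sketch: a heuristic screen for common prompt-injection
# phrasings. The pattern list and threshold are assumptions for this example.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the )?system prompt",
    r"you are now (in )?developer mode",
    r"reveal (your )?(system prompt|hidden instructions)",
]

def injection_score(text: str) -> int:
    """Count how many known injection phrasings appear in the input."""
    lowered = text.lower()
    return sum(1 for pattern in INJECTION_PATTERNS if re.search(pattern, lowered))

def looks_like_injection(text: str, threshold: int = 1) -> bool:
    """Flag input whose heuristic score meets the threshold."""
    return injection_score(text) >= threshold
```

  Such a screen is cheap enough to run on every message an agent receives, which is why heuristic prefilters often sit in front of heavier model-based detectors.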

  Our lab studies AI for Security as a way to diagnose security problems in both conventional computing systems and AI-driven systems. Rather than focusing only on traditional machine-learning-based detection, we are interested in AI systems that can reason across heterogeneous security evidence, interact with tools, monitor AI-agent workflows, and support human analysts in high-stakes security tasks. In short, we study how AI can help find when systems go wrong — and when AI itself starts going wrong.

Core Research Themes

  Our lab explores AI for Security through several complementary research directions; the sub-topics listed below give a representative sample of them.


Key Sub-Topics & Keywords

To give a sense of the topics you might pursue, our research includes, but is not limited to:

  1. LLM-assisted vulnerability discovery and triage
  2. AI-based detection of prompt injection and jailbreak attempts
  3. Monitoring AI-agent tool use, memory access, and workflow behavior
  4. Detection of abnormal token usage and suspicious AI-system behavior
  5. Security analysis under poisoned or misleading knowledge
  6. AI-assisted incident analysis and security report generation
  7. Red-team and blue-team evaluation of AI security agents
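  As a concrete sketch of sub-topic 4, abnormal token usage can be flagged with a simple robust-statistics rule over per-request token counts. The function name and threshold below are assumptions for illustration; a deployed monitor would draw on richer features (tool calls, latency, caller identity) rather than raw counts alone.

```python
from statistics import median

def flag_token_anomalies(counts, threshold=5.0):
    """Return indices of per-request token counts that deviate strongly
    from the median, using a median/MAD rule that is robust to outliers.
    The threshold is an illustrative assumption, not a tuned value."""
    med = median(counts)
    # Median absolute deviation: a robust analogue of the standard deviation.
    mad = median(abs(c - med) for c in counts)
    if mad == 0:
        return []  # all counts (nearly) identical; nothing to flag
    return [i for i, c in enumerate(counts) if abs(c - med) / mad > threshold]
```

  The median/MAD rule is chosen over a plain z-score because a single runaway request inflates the mean and standard deviation enough to mask itself, whereas the median stays anchored to normal behavior.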

Student Note: If you are interested in both AI and cybersecurity, this field may be a good fit for you. You will study how modern AI can help find vulnerabilities, analyze complex security evidence, and support security analysts. At the same time, you will learn how to detect when AI-driven systems themselves behave abnormally, insecurely, or inefficiently — and why such systems must be carefully evaluated, monitored, and controlled before they can be trusted in real-world security workflows.
