From Alerts to Action – LLMs for Explainability, Entity Analysis, and Escalation
OVERVIEW
Analysts are flooded with alerts, but understanding them quickly and accurately is the real challenge. In this talk, we’ll show how large language models (LLMs) and entity recognition techniques can transform alert triage and escalation workflows.
We’ll show how named-entity recognition (NER) can enrich alerts at scale with actionable context by automatically identifying IPs, hosts, and users. We’ll also demonstrate how LLMs can extract indicators of compromise (IOCs) from unstructured data, summarize alert chains, and even generate human-readable explanations to assist triage.
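To give a flavor of the kind of enrichment the talk covers, here is a minimal, illustrative sketch in Python. The regex patterns, `enrich_alert` and `build_triage_prompt` names, and the prompt wording are assumptions made for this example, not the exact pipeline presented in the session.

```python
import re

# Minimal, illustrative entity/IOC extraction from a raw alert string.
# The patterns below are placeholders chosen for this example only.
IOC_PATTERNS = {
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "sha256": re.compile(r"\b[a-fA-F0-9]{64}\b"),
    "hostname": re.compile(r"\b[a-z0-9-]+\.(?:corp|internal|local)\b"),
    "user": re.compile(r"\buser[=:]\s*(\w+)", re.IGNORECASE),
}

def enrich_alert(raw_alert: str) -> dict:
    """Return the alert text plus the entities/IOCs found in it."""
    entities = {
        label: sorted(set(pattern.findall(raw_alert)))
        for label, pattern in IOC_PATTERNS.items()
    }
    return {"raw": raw_alert, "entities": entities}

def build_triage_prompt(enriched: dict) -> str:
    """Assemble a prompt asking an LLM for a human-readable explanation.

    The prompt wording is a placeholder; any chat-completion API could
    consume it to produce a triage summary for the analyst.
    """
    return (
        "Summarize this security alert for a SOC analyst, listing the "
        f"involved entities and a suggested next step:\n\n{enriched['raw']}\n\n"
        f"Extracted entities: {enriched['entities']}"
    )

if __name__ == "__main__":
    alert = "Suspicious login user: jdoe from 203.0.113.42 to web01.corp"
    enriched = enrich_alert(alert)
    print(enriched["entities"])
    print(build_triage_prompt(enriched))
```

In practice the regex layer would be replaced or supplemented by a trained NER model, and the assembled prompt would be sent to whichever LLM the team operates; the sketch only shows how extracted entities and the raw alert can be combined into a single triage request.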
This talk isn’t about hypothetical AI. It’s a grounded, practical walkthrough of how defenders can operationalize natural language models today to make alerts clearer, faster to act on, and more informative.