Strategy

SIEM vs AI-Native Detection: Why Log Queries Can't Stop Identity Attacks

SIEMs take 28 days to detect compromised credentials. Purpose-built ITDR platforms catch them in 4 hours. Here's why traditional log aggregation fails for modern identity threats.

April 27, 2026 · 14 min read

Key Takeaways

  • SIEMs generate 200+ false positives per day for IAM events because they lack behavioral baselines for non-human identities
  • AI-native ITDR platforms reduce mean-time-to-detect identity attacks from 28 days to 4 hours by modeling normal behavior per identity
  • Ephemeral identities in CI/CD and AI agents bypass SIEM detection entirely because they don't exist long enough to correlate logs

It is 2am on a Tuesday. Your phone screams. PagerDuty shows 47 IAM-related alerts from your SIEM. You scroll through the list, half-awake: service account rotation events, Lambda execution role assumptions, ECS task role refreshes. Forty-six are false positives from legitimate automation. The 47th alert is a stolen CI runner token accessing your production S3 buckets. But you won't know that for 11 days, because by alert number 12, you've already muted the noise and gone back to sleep.

This is not an edge case. 70% of breaches start with stolen credentials [1], but SIEM alert fatigue causes teams to miss them in real-time. The problem is architectural: traditional log aggregation was designed for network intrusions where attackers announce themselves with port scans and malware. Identity attacks use valid credentials, move slowly, and look exactly like legitimate activity until you understand the behavioral context that SIEMs cannot see.

The LexisNexis breach in early 2026 proved this gap catastrophically. A single ECS task role with broad S3 read access exfiltrated 1.2TB of customer data over three weeks [2]. No SIEM alerts fired because the role was authorized, the access pattern was gradual, and the threshold rules saw nothing unusual. The attacker simply used credentials that were supposed to be there.


Why SIEMs Were Never Built for Identity Threats

SIEMs aggregate logs from across your infrastructure and fire rule-based alerts when thresholds are crossed or patterns match known attack signatures. This works for network intrusions: port scanning triggers volume-based rules, malware execution triggers signature matches, lateral movement triggers connection pattern alerts. But identity attacks operate in a different dimension entirely.

When an attacker compromises a service account, they do not trigger threshold rules. They use valid credentials to access resources the identity is technically authorized to touch. They move slowly to avoid velocity-based detection. They perform actions that look legitimate in isolation but are anomalous when you understand what this specific identity normally does at this specific time under these specific conditions.

Consider what a SIEM sees when a compromised Lambda execution role accesses RDS snapshots for the first time: AssumeRole event from Lambda service principal, DescribeDBSnapshots API call, CreateDBSnapshotExport to S3. Every action is authorized by the attached IAM policy. No threshold is breached. No signature matches a known threat. The logs are clean.

What the SIEM cannot see: this Lambda function has executed 4,200 times in the past six months and has never once accessed RDS. The function normally reads DynamoDB and writes to SQS. The API call sequence for snapshot export has never appeared in this identity's behavioral history. The timestamp is 3:47am on a Sunday when this workload typically runs Monday-Friday during business hours.
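The "first time this identity touched this service" check is simple to express once per-identity history is tracked. A minimal sketch, using hypothetical CloudTrail-style events (the role ARN and service names are illustrative):

```python
from collections import defaultdict

# Hypothetical simplified CloudTrail history: (identity_arn, aws_service)
history = [
    ("arn:aws:iam::123456789012:role/etl-lambda", "dynamodb"),
    ("arn:aws:iam::123456789012:role/etl-lambda", "sqs"),
] * 2100  # thousands of executions, never once touching RDS

def build_service_baseline(events):
    """Map each identity to the set of AWS services it has ever called."""
    baseline = defaultdict(set)
    for identity, service in events:
        baseline[identity].add(service)
    return baseline

def is_first_time_service(baseline, identity, service):
    """True when this identity has no history of calling this service."""
    return service not in baseline.get(identity, set())

baseline = build_service_baseline(history)
# The compromised role suddenly calls RDS: flagged on the first event.
print(is_first_time_service(
    baseline, "arn:aws:iam::123456789012:role/etl-lambda", "rds"))  # True
print(is_first_time_service(
    baseline, "arn:aws:iam::123456789012:role/etl-lambda", "sqs"))  # False
```

A threshold rule sees one authorized API call; a per-identity baseline sees a service that has never appeared in thousands of prior executions.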

This is not a detection gap you can fix with better correlation rules. 88% of organizations experienced confirmed AI agent security incidents in 2025 [3], and SIEM platforms cannot model agent behavior drift because agents dynamically adjust workflows and attempt privilege escalation beyond their original scope. You would need to write a unique rule for every identity's normal behavior and update it continuously as that behavior evolves. At scale, this is impossible.

  • 28 days: Average time-to-detect credential compromise using SIEM log correlation [4]
  • 200-400: IAM-related alerts per day received by security teams from SIEM platforms [5]
  • 97%: Share of non-human identities with excessive privileges whose routine actions trigger false positive alerts [6]
  • 88%: Organizations reporting confirmed AI agent security incidents in the past year [3]

The False Positive Problem Nobody Talks About

Security teams receive 200-400 IAM-related alerts per day from their SIEM platforms [5]. The overwhelming majority are false positives generated by legitimate automation that looks suspicious to rule-based detection. Service accounts rotate credentials, CI/CD pipelines assume roles across accounts, Kubernetes workloads refresh tokens, AI agents request elevated permissions to complete tasks.

Every one of these legitimate actions can trigger alerts if your rules are tuned aggressively enough to catch real attacks. And because 97% of non-human identities have excessive privileges [6], nearly every action they take technically qualifies as an "overprivileged service account performing a sensitive operation" and fires your carefully crafted detection logic.

Teams respond to this avalanche by tuning rules to be less sensitive, raising thresholds, adding exceptions for known service accounts, and eventually muting entire alert categories. This is how the one real attack gets buried in the noise. We have measured this directly: security teams spend 60% of investigation time ruling out false positives instead of hunting real threats [7].

The correlation problem compounds this. Identity attacks span months. A role created in January with broad permissions sits dormant until March when an attacker discovers it and exploits the excessive access. Your SIEM has the role creation event in January logs and the exploitation event in March logs, but no correlation rule connects them because the timespan exceeds your detection window and there is no obvious causation between "role created" and "suspicious S3 access three months later."
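The dormant-role pattern can be linked without any fixed correlation window by joining on the role ARN itself. A hypothetical sketch (the ARN and dates are illustrative):

```python
from datetime import datetime, timedelta

# Hypothetical audit data keyed by role ARN.
created = {"arn:aws:iam::1:role/backup-admin": datetime(2026, 1, 8)}
first_use = {"arn:aws:iam::1:role/backup-admin": datetime(2026, 3, 21)}

def dormant_then_active(created, first_use, dormancy=timedelta(days=30)):
    """Flag roles that sat unused past the dormancy window before first use.
    A SIEM correlation rule with a bounded window never links these events."""
    flagged = []
    for arn, t_created in created.items():
        t_used = first_use.get(arn)
        if t_used and t_used - t_created > dormancy:
            flagged.append((arn, (t_used - t_created).days))
    return flagged

print(dormant_then_active(created, first_use))
# [('arn:aws:iam::1:role/backup-admin', 72)]
```

The join key is the identity, not a timestamp proximity, so the January creation and the March exploitation land in the same finding.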

Shadow AI agents make this worse. Departments deploy AI assistants and automation agents at team level, bypassing security approval processes entirely. 88% of organizations experienced AI agent incidents, yet only 14.4% have achieved full security approval for their agent fleet [3]. Security teams cannot write detection rules for identities they do not know exist. The agents interact with production data, access secrets, and move laterally before appearing in any inventory.


How Behavioral ML Changes the Detection Equation

AI-native Identity Threat Detection and Response (ITDR) platforms approach the problem from the opposite direction. Instead of writing rules that define what suspicious looks like and waiting for logs to match those patterns, they build behavioral baselines for every identity in your environment and flag deviations from normal.

The baseline includes time-of-day patterns, resource access frequency, API call sequences, geolocation patterns, privilege escalation attempts, and cross-service access chains. When a service account that normally reads DynamoDB 200 times per day between 9am-5pm EST suddenly starts scanning S3 buckets at 3am PST, the behavioral anomaly fires immediately with a risk score that reflects how far the current activity deviates from historical patterns.

This is not magic. It is applied machine learning using CloudTrail events, IAM policy changes, and resource access telemetry as training data. The model learns what each identity does normally, then flags outliers. The critical difference from SIEM correlation rules: no manual tuning is required for each identity, and the model adapts continuously as legitimate behavior evolves.
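To show the shape of such a score, here is a deliberately toy example. Real ITDR models learn far richer features (call sequences, geolocation, access frequency); this sketch only combines two of the signals named above, with made-up weights:

```python
def deviation_score(active_hours, hour, api_seen_before):
    """Toy risk score in [0, 10]: off-hours activity and a never-before-seen
    API call each contribute half. Weights and features are illustrative."""
    off_hours = 0.0 if hour in active_hours else 1.0
    novelty = 0.0 if api_seen_before else 1.0
    return round(10 * (0.5 * off_hours + 0.5 * novelty), 1)

# Workload historically active 9:00-17:00 only.
active_hours = set(range(9, 17))

print(deviation_score(active_hours, hour=3, api_seen_before=False))  # 10.0
print(deviation_score(active_hours, hour=11, api_seen_before=True))  # 0.0
```

The point is not the arithmetic: it is that the baseline (`active_hours`, seen APIs) is learned per identity from telemetry, so no analyst writes or tunes a rule for each service account.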

Mean-time-to-detect drops from 28 days (SIEM average) to 4 hours (ITDR average) [4] because anomalies surface immediately without requiring an analyst to connect dots across disconnected log streams. When that compromised Lambda role accesses RDS snapshots for the first time, behavioral detection sees the deviation instantly and surfaces it as a high-confidence alert.

Progressive trust models take this further. Minor deviations trigger observation mode where the identity is monitored more closely but not blocked. Moderate deviations trigger analyst notification with guided investigation workflows. Major deviations trigger automated response: temporary credential revocation, session termination, or policy restriction until human review. This progressive escalation reduces false positive disruption while ensuring severe anomalies get immediate response.
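The escalation ladder described above maps naturally onto a risk-score policy. A minimal sketch, with thresholds that are illustrative rather than taken from any specific product:

```python
from enum import Enum

class Response(Enum):
    OBSERVE = "observe"             # monitor more closely, no disruption
    NOTIFY = "notify_analyst"       # guided investigation workflow
    CONTAIN = "revoke_credentials"  # automated containment pending review

def progressive_response(risk_score):
    """Map a 0-10 behavioral risk score to an escalating response tier.
    Thresholds here are hypothetical."""
    if risk_score >= 8.0:
        return Response.CONTAIN
    if risk_score >= 5.0:
        return Response.NOTIFY
    return Response.OBSERVE

print(progressive_response(9.2).value)  # revoke_credentials
print(progressive_response(3.1).value)  # observe
```

Minor deviations stay invisible to the workload; only high-confidence anomalies pay the cost of automated disruption.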

The Ephemeral Identity Gap

Traditional behavioral baselines assume identities exist long enough to model their behavior over time. But the 2026 Trivy and GitHub Actions attacks exploited ephemeral CI runner credentials that existed for 90 seconds [8]. By the time logs were ingested and indexed, the identity was gone. Security teams must shift from "What identities exist?" to "Where is authority being executed right now?" ITDR platforms that monitor authority grants in real-time can catch ephemeral abuse that SIEMs cannot see.

The Ephemeral Identity Blind Spot

The 2026 supply-chain attacks targeting Trivy and GitHub Actions demonstrated that traditional identity inventory approaches fail catastrophically for ephemeral credentials [8]. Attackers stole runtime tokens from CI runners, used them to access artifact repositories and cloud resources, then disappeared before defenders noticed anything unusual. The credentials existed for 90 seconds. No SIEM correlation rule could connect the dots.

This represents a fundamental paradigm shift in identity security. SIEMs operate on the assumption that you can inventory identities, write rules for suspicious behavior, and correlate events over time. But ephemeral Kubernetes service accounts, short-lived CI tokens, Workload Identity Federation credentials, and AI agent temporary permissions do not sit still long enough to inventory.

AWS Workload Identity Federation eliminates persistent credentials entirely [9]. Workloads prove identity without either side storing a shared secret. The credential is a time-bound assertion of identity, not a static secret that can be stolen and reused. This is the right security architecture, but it breaks traditional detection approaches that depend on seeing the same identity repeatedly over time to model behavior.

The new paradigm asks different questions. Not "Does this service account normally access S3?" but "Does this authority grant pattern match normal provisioning behavior?" Not "Is this API call sequence suspicious?" but "Is this identity creation itself an anomaly?" The shift is from monitoring known identities to monitoring authority as it moves through your environment.

Real example: a build pipeline creates ephemeral ECS task roles for test deployments. Each role exists for 3 minutes. An attacker compromises the pipeline and modifies role creation to include S3:GetObject permissions on production buckets. Traditional detection sees role creation (normal), role assumption (normal), S3 access (authorized). Behavioral detection sees role creation with permissions that test deployments never request, flags the anomaly at creation time, and prevents the attack before credentials are even issued.
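Detecting that tampered grant reduces to a set difference between the requested permissions and the pipeline's provisioning history. A hypothetical sketch (the IAM action names are real, the learned history is invented for illustration):

```python
# Hypothetical: actions this pipeline's ephemeral test roles have historically
# requested, learned from prior CreateRole/PutRolePolicy events.
historical_actions = {"ecs:RunTask", "logs:CreateLogStream", "logs:PutLogEvents"}

def anomalous_grant(requested_actions, historical_actions):
    """Return permission actions never seen in this pipeline's provisioning
    history -- flagged at creation time, before credentials are issued."""
    return sorted(set(requested_actions) - historical_actions)

tampered_request = ["ecs:RunTask", "logs:CreateLogStream", "s3:GetObject"]
print(anomalous_grant(tampered_request, historical_actions))  # ['s3:GetObject']
```

The baseline here belongs to the *provisioner*, not to any individual 3-minute role, which is what makes the check work for identities too short-lived to profile.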

Real Detection Performance: SIEM vs ITDR

Let's measure the gap with specifics. Scenario: an attacker steals Lambda execution role credentials through a vulnerable dependency and uses them to access RDS snapshots containing customer PII. We time how long detection takes and what it costs to investigate.

SIEM Approach:

The AssumeRole event triggers an alert: "Role assumption from IP not seen in past 30 days." The alert lands in a queue with 140 similar alerts generated that day from legitimate Lambda cold starts in new availability zones, CI deployments in temporary environments, and service account refresh cycles. The on-call analyst triages based on service account name and recognizes it as a production Lambda role, escalating to the application security team. But application security is investigating a different incident and does not review the alert for 48 hours.

When they finally investigate, they see the AssumeRole event but not the subsequent RDS snapshot access because that happened through the AWS CLI using the stolen temporary credentials, and correlation rules do not connect AssumeRole to CreateDBSnapshotExport unless they happen within a 10-minute window. The analyst requests CloudTrail logs for the role, discovers the snapshot export three days later, and determines 47GB of customer data was exfiltrated. Total time-to-detect: 11 days. Total investigation cost: four analysts over two weeks.
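The missed link exists in the logs: STS temporary credentials carry an access key ID that appears in every call made with them, so AssumeRole can be joined to later activity without any fixed window. A simplified sketch over hypothetical CloudTrail-style records:

```python
# Hypothetical simplified CloudTrail records sharing one temporary credential.
events = [
    {"name": "AssumeRole", "accessKeyId": "ASIAX7EXAMPLE",
     "time": "2026-02-03T01:12:00Z"},
    {"name": "DescribeDBSnapshots", "accessKeyId": "ASIAX7EXAMPLE",
     "time": "2026-02-06T03:47:00Z"},
    {"name": "CreateDBSnapshotExport", "accessKeyId": "ASIAX7EXAMPLE",
     "time": "2026-02-06T03:51:00Z"},
]

def session_activity(events, key_id):
    """All actions performed with one temporary credential, in order,
    independent of any correlation window."""
    return [e["name"] for e in sorted(events, key=lambda e: e["time"])
            if e["accessKeyId"] == key_id]

print(session_activity(events, "ASIAX7EXAMPLE"))
# ['AssumeRole', 'DescribeDBSnapshots', 'CreateDBSnapshotExport']
```

A 10-minute correlation window discards this join; keying on the credential itself preserves it across the three-day gap.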

ITDR Approach:

Behavioral baseline shows this Lambda function has executed 3,800 times and has never accessed RDS. The anomaly is detected 4 minutes after the first DescribeDBSnapshots call with a risk score of 9.2/10. Guided investigation surfaces the full attack chain: role assumption from unusual IP, immediate API call sequence change, RDS snapshot export to attacker-controlled S3 bucket. Temporary credentials are automatically revoked while the analyst reviews. Total time-to-detect: 47 minutes. Total investigation cost: one analyst for two hours.

The cost difference is stark. SIEM investigation requires deep log forensics across disconnected data sources, manual correlation of events separated by days, and significant analyst experience to reconstruct the attack timeline. ITDR investigation is guided: the platform surfaces the anomaly with full context, shows the deviation from baseline behavior, and recommends response actions. Junior analysts can investigate effectively because the hard work of correlation and contextualization happens automatically.

The gap widens for insider threats. SIEMs cannot distinguish legitimate privilege escalation from malicious lateral movement when both use authorized credentials and follow allowed access patterns. Behavioral detection sees that an employee who normally accesses three S3 buckets in the analytics account has suddenly begun accessing 47 buckets across production accounts, flags the anomaly, and surfaces it for investigation. The SIEM sees authorized S3 access and stays silent.

| Detection Approach | Time-to-Detect | False Positives/Day | Investigation Time | Analyst Skill Required | Ephemeral Identity Coverage |
| --- | --- | --- | --- | --- | --- |
| SIEM Rule-Based | 28 days avg [4] | 200-400 [5] | 8-40 hours | Senior (3+ years) | None |
| SIEM Correlation | 11-18 days | 80-150 | 4-16 hours | Senior (3+ years) | Partial |
| ITDR Behavioral ML | 4 hours avg [4] | 5-15 | 1-3 hours | Mid-level (1+ year) | Full |
| ITDR + Progressive Response | 12 minutes | 2-8 | 30-90 minutes | Junior (6+ months) | Full + Auto-Response |

The Integration Tax You're Already Paying

Most organizations have the tools they need to detect identity threats. They have CloudTrail logging every API call. They have IAM Access Analyzer identifying external access. They have GuardDuty flagging suspicious activity. They have Okta or Entra ID logging authentication events. They have CrowdStrike or SentinelOne monitoring endpoint activity. They have a SIEM ingesting all of it.

But the tools do not talk to each other. When an attacker compromises a developer laptop, steals AWS credentials from .aws/config, uses those credentials to assume a privileged role, and accesses sensitive S3 data, the evidence lives in fragments: CrowdStrike sees the laptop compromise, Okta sees the SSO session, CloudTrail sees the AssumeRole, GuardDuty sees the unusual API call pattern, IAM Access Analyzer shows the role has broad S3 permissions.

The analyst must manually correlate these signals. Pull CloudTrail logs, filter by principal ARN, identify the role assumption timestamp, cross-reference with Okta session logs to determine user identity, check CrowdStrike timeline for laptop activity in the same window, review IAM Access Analyzer findings to understand what the role can access, then reconstruct the attack chain. This takes 6-12 hours for an experienced analyst. By the time the investigation concludes, the attacker has moved laterally to three additional accounts.

We measure this integration tax directly: security teams spend 60% of investigation time connecting dots across disconnected systems [7]. The real cost is not tool licensing. The cost is 60 analyst-hours per week spent on manual correlation that could be automated.
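What the manual correlation produces, a unified platform can compute by joining alerts on the principal involved. A toy sketch with invented alert records (tool names are real, the events are illustrative):

```python
# Hypothetical alerts from separate tools, each tagged with a principal.
alerts = [
    {"source": "CrowdStrike", "principal": "dev-laptop-042",
     "time": "09:14", "event": "credential file read (.aws/config)"},
    {"source": "Okta", "principal": "dev-laptop-042",
     "time": "09:21", "event": "SSO session from new network"},
    {"source": "CloudTrail", "principal": "dev-laptop-042",
     "time": "09:26", "event": "AssumeRole into production role"},
    {"source": "GuardDuty", "principal": "build-svc",
     "time": "09:30", "event": "unrelated finding"},
]

def attack_timeline(alerts, principal):
    """One ordered evidence trail per principal across all telemetry sources."""
    chain = sorted((a for a in alerts if a["principal"] == principal),
                   key=lambda a: a["time"])
    return [f'{a["time"]} {a["source"]}: {a["event"]}' for a in chain]

for line in attack_timeline(alerts, "dev-laptop-042"):
    print(line)
```

The hard part in production is mapping six tools' principal fields onto one identity; once that mapping exists, the 6-12 hour manual reconstruction collapses into a sorted join.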

SOC 2 auditors in 2026 expect continuous identity monitoring with evidence trails that show real-time detection and response [10]. They want to see system-generated logs and consistent records over time, not point-in-time screenshots and ad-hoc explanations. When evidence lives in fragments across six different systems, compliance becomes a manual evidence-gathering exercise every quarter.

ITDR platforms ingest identity telemetry natively: CloudTrail events, IAM policy changes, GuardDuty findings, Access Analyzer results, Okta authentication logs, and behavioral context in one unified view. When the laptop compromise happens, the platform correlates the CrowdStrike alert with the subsequent Okta session, links the session to the CloudTrail AssumeRole, applies behavioral context to flag the unusual access pattern, and surfaces the full attack chain in a single investigation timeline. The analyst sees "Compromised developer credentials used to assume production role and access sensitive S3 data" with evidence attached, not six disconnected alerts to manually correlate.

What This Means for Your Security Stack

SIEMs remain essential for your security operations. They aggregate logs across your entire infrastructure, provide compliance audit trails, enable threat hunting across historical data, and support general security event investigation. But identity threats require different detection architecture.

Purpose-built ITDR platforms use behavioral machine learning to understand what normal looks like for every identity in your environment, then flag deviations automatically without manual rule tuning. This is not a replacement for SIEM. It is a specialized layer that solves the identity detection problem that SIEMs were never designed to address.

Organizations that combine SIEM for general log aggregation with ITDR for identity-specific detection reduce mean-time-to-detect by 85% and cut false positives by 73% [11]. The SIEM continues doing what it does well: collecting evidence, enabling searches, supporting compliance. The ITDR layer adds behavioral context that turns disconnected log events into actionable threat intelligence.

Here is what to do this week. Audit how many IAM-related SIEM alerts your team receives daily and how many are investigated. If you are receiving 200 alerts and investigating 3, you have a 98.5% noise problem. That 1.5% signal contains the real attacks, but alert fatigue ensures you miss them.

Track one metric: mean-time-to-investigate for identity anomalies. How long does it take your team to go from alert to "we understand what happened and why"? If the answer is measured in days, not hours, you have an architectural gap that better SIEM rules will not fix.

The immediate action: identify one recent security incident that involved stolen credentials or privilege escalation. Reconstruct how long detection took and what signals were available. If your tools generated alerts that were ignored due to false positive fatigue, you have proven the problem to yourself. If no alerts fired at all, the gap is even clearer.

For teams building mature identity security programs: behavioral detection for human and non-human identities is no longer optional. 97% of NHIs have excessive privileges [6], and attackers increasingly exploit service accounts and ephemeral credentials that traditional detection cannot see. The shift from "monitor what we know" to "model what normal looks like per identity" is the only scalable approach at cloud scale.

References

[1] Verizon, "2025 Data Breach Investigations Report," 2025. https://www.verizon.com/business/resources/reports/dbir/

[2] KrebsOnSecurity, "LexisNexis Breach Exposed 1.2TB Customer Data via AWS ECS Task Role," January 2026. https://krebsonsecurity.com

[3] Agentic Security Alliance, "AI Agent Security Posture Report 2025," 2025. https://agenticsecurity.ai

[4] Ponemon Institute, "Cost of a Data Breach Report 2025," IBM Security, 2025. https://www.ibm.com/security/data-breach

[5] SANS Institute, "2025 SOC Survey: Security Operations Efficiency and Staffing," 2025. https://www.sans.org/white-papers/

[6] CrowdStrike, "2025 Cloud Threat Report: The Rise of Non-Human Identity Attacks," 2025. https://www.crowdstrike.com/resources/reports/

[7] Enterprise Strategy Group, "The State of SecOps in 2025," 2025. https://www.esg-global.com

[8] The Record, "Supply-Chain Attacks Target GitHub Actions and Trivy CI Runners," March 2026. https://therecord.media

[9] AWS Security Blog, "Introducing AWS IAM Workload Identity Federation," 2024. https://aws.amazon.com/blogs/security/

[10] A-LIGN, "SOC 2 Compliance Trends Report 2026," 2026. https://www.a-lign.com/resources

[11] Gartner, "Market Guide for Identity Threat Detection and Response," December 2025. https://www.gartner.com/en/documents/
