
From DevSecOps to Agentic DevSecOps
DevSecOps was built on a simple premise: integrate security into the way humans build software. But "everyone" now includes AI agents. They write code and merge pull requests, while your security model is still designed for humans. Redesigning security for agents is what we call Agentic DevSecOps. It changes how we think about identity, access control, verification, and accountability.

DevSecOps assumed humans in the loop

The whole point of DevSecOps was to make everyone own security by shifting security left and building it into development, so vulnerabilities were caught before they reached production. All of that assumed a human developer writing the code, understanding the intent, reviewing scan results, and making judgement calls about risk. Developers can use tooling like static analysis to flag potential issues, but ultimately a human evaluates whether a flag is a false positive or a real threat, and a reviewer reads the diff and considers the broader implications. Sonar's 2025 survey found that 42% of production code already involves AI, and that number is climbing. Once agents start opening PRs and merging their own code, none of these assumptions hold.

What makes Agentic DevSecOps different

Agentic DevSecOps means redesigning security for a world where AI agents write and ship your code.

Who is the agent, and what can it do?

In traditional DevSecOps, access controls are tied to human identities. When an AI agent opens a PR, whose permissions does it use? What should it be allowed to do? In our experience, most organizations run agents under a developer's personal credentials, which means the agent inherits permissions that were calibrated for a human's judgement, not an AI's. AI agents also pick their own dependencies. Veracode's research found that 45% of AI-generated code contains vulnerabilities.
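One mitigation for agent-chosen dependencies is a pre-merge gate that checks every package an agent adds against an internally vetted allowlist. A minimal sketch, assuming a hypothetical allowlist and function name (neither comes from any specific tool):

```python
# Pre-merge gate sketch: reject agent-added dependencies that are not
# on an internally vetted allowlist. All names here are illustrative.

VETTED_PACKAGES = {"requests", "flask", "sqlalchemy"}  # hypothetical internal allowlist


def vet_new_dependencies(proposed: list[str]) -> list[str]:
    """Return the subset of proposed packages that are NOT vetted.

    An empty result means the change may proceed; anything else
    should block the merge and route the diff to a human reviewer.
    """
    return [pkg for pkg in proposed if pkg.lower() not in VETTED_PACKAGES]


if __name__ == "__main__":
    # An agent proposes two dependencies; the unknown one (possibly
    # hallucinated or typosquatted) gets flagged for review.
    flagged = vet_new_dependencies(["requests", "flask-utilz"])
    print(flagged)  # ['flask-utilz']
```

In practice the allowlist would live in a registry proxy or policy service rather than in code, but the control point is the same: the agent's proposed change is checked before it can merge, not after.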
An agent can introduce a dependency that's technically clean but architecturally wrong, or generate code that mimics a vulnerable pattern without triggering signature-based detection.

Speed breaks the verification model

AI agents can generate and ship code ten to a hundred times faster than humans. A 15-minute security scan works fine when developers push a few times a day. When agents push dozens of changes per hour, that scan becomes either a bottleneck that defeats the purpose of using agents, or gets bypassed "temporarily". Agents leave plenty of trail in commit messages and PR descriptions, but by the time a broken change surfaces in production, the agent has pushed dozens more commits on top of it. Finding the offending commit under that volume is hard, even with agents helping in the search.

AI agents fail differently than humans

The threat model for AI-generated code is different. Humans make predictable mistakes: forgotten input validation, copy-pasted insecure patterns, hardcoded credentials, shortcuts under pressure. A reviewer can spot these. AI agents generate code that looks correct, passes basic checks, and is subtly wrong. The code reads well; it just doesn't do what you think it does. New attack vectors are already showing up in agentic workflows:

Prompt injection through code context

An attacker embeds malicious instructions in a codebase comment or issue description. The AI agent reads that context, follows the instructions, and introduces a backdoor that looks like a legitimate code change. Researchers have already demonstrated that LLMs can be manipulated through their input context.

Hallucinated dependencies

An AI agent asked to add a feature might pull in a dependency that doesn't exist yet. If an attacker registers that package name first, the agent helpfully installs the malicious package. Pinned versions and lockfiles normally guard against surprise packages, but agents that modify lockfiles as part of their workflow bypass those protections.

Zero human eyes

An AI agent writes the code, another reviews it, an automated pipeline deploys it.
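A simple guard against a fully agent-to-agent path is a deploy gate that requires at least one approval from a human identity. A sketch under assumed data shapes (the reviewer sets and function name are illustrative, not from any specific platform):

```python
# Deploy gate sketch: block changes where every approval came from an
# agent identity. The identity sets below are illustrative.

HUMAN_REVIEWERS = {"alice", "bob"}                      # hypothetical human directory
AGENT_IDENTITIES = {"codegen-agent", "review-agent"}    # registered agent accounts


def has_human_approval(approvers: list[str]) -> bool:
    """True only if at least one approver is a known human identity."""
    return any(a in HUMAN_REVIEWERS for a in approvers)


if __name__ == "__main__":
    # One agent wrote the code, another approved it: zero human eyes.
    print(has_human_approval(["review-agent"]))           # False -> block deploy
    print(has_human_approval(["review-agent", "alice"]))  # True  -> allow deploy
```

Checking membership in a known-humans set, rather than absence from a known-agents set, fails closed: an unregistered agent identity still cannot satisfy the gate.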
Nobody planned for a fully automated path to production, but the steps chain together into one.

Credential sprawl

AI agents need API keys and service credentials to do their work. An agent that logs its full context, or that includes secrets in a commit message or PR description, can expose credentials in places your secret scanning doesn't cover. The more autonomous the agent, the more credentials it touches.

Improving security

Give agents their own identity

Create dedicated service identities for AI agents with scoped permissions. An agent that writes code shouldn't be able to merge it. An agent that runs tests shouldn't be able to modify the test configuration. In practice, we still find most agents running under a senior dev's personal token with full repo access. Treat them like any other service account: minimal permissions and audited access.

Layer your verification

A single security scan isn't enough. Stack static analysis, semantic analysis, behavioral testing, and anomaly detection on the diff patterns themselves. AI-generated code has detectable patterns; use that to improve verification.

Slow agents down on purpose

Put limits on how fast agents can push changes, and build circuit breakers that pause activity when anomalies appear: unusual dependency additions or changes to security-sensitive files.

Track provenance

The EU AI Act's transparency obligations already cover AI-generated code in regulated industries, and enforcement is coming. Every change should trace back to who (or what) wrote it, what prompted it, what context the agent had, and what review it received. Build the audit trail now.

Enforce human review where it matters

Not every change needs a human reviewer. But changes to authentication, authorization, payment processing, data handling, and infrastructure do. Define your high-risk zones and hold that line, even when it slows things down.

The organizational shift

Agentic DevSecOps is an organizational problem as much as a technical one.
Security teams need to understand how agents fail. Dev teams should treat agents like new hires: set guardrails and supervise the output. Platform infrastructure has to account for non-human participants in the pipeline. Organizations that get this right can deploy agents aggressively, because they've built the controls to match. The alternative is bolting agents onto pipelines designed for humans and patching gaps after each incident.

Lumia Labs helps organizations build secure engineering practices for AI-augmented teams. If you're deploying AI agents in your development pipeline and want to get security right, we'd like to hear from you.


