
AI Might Be Making Your Team Worse
We've started to see reduced learning in AI-assisted development. A developer finishes a feature in half the usual time. The code works, the PR gets merged, everyone's happy. Except that same developer can't debug the same code without AI help. They shipped the code, but they didn't learn anything, and now they depend on AI for every change that follows.

This week we came across research that puts data behind what we've been observing. Anthropic published a study called "How AI Impacts Skill Formation" that provides evidence for something engineering leaders have been quietly worrying about: AI coding tools can impair programming skill development.

Anthropic's research

Researchers ran a controlled experiment with developers learning Python Trio, an asynchronous programming library. They chose Trio specifically because it requires understanding new concepts like structured concurrency, not just Python syntax. Half the participants had access to AI assistance, half didn't.

The results were striking: the AI group scored 17% lower on knowledge assessments, a 4.15-point gap on a 27-point quiz with a substantial effect size (Cohen's d = 0.738). Debugging questions showed the largest difference between groups, and debugging is the skill that matters most when something breaks in production.

The AI group also encountered far fewer errors during the learning process. The median AI-assisted participant hit 1 error compared to 3 for the control group. On the surface that sounds like a benefit, but errors are how developers learn. RuntimeWarnings, TypeErrors, the frustration of debugging: these moments force you to understand how code works. The AI removed the struggle, and with it, the learning.
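To make that concrete, here is a rough sketch of the kind of code the participants were working with. It isn't taken from the study's materials: the function names and timings are our own, and it only illustrates Trio's nursery construct plus the forgotten-await mistake that produces the RuntimeWarnings mentioned above.

```python
# A minimal illustration (not from the study): Trio's structured concurrency
# and a classic beginner mistake. Requires `pip install trio`.
import trio

async def fetch(name: str, delay: float) -> None:
    # Stand-in for real work such as an HTTP request.
    await trio.sleep(delay)
    print(f"{name} finished after {delay}s")

async def main() -> None:
    # The nursery is Trio's core structured-concurrency construct: every task
    # started inside this block must finish (or be cancelled) before the
    # block exits, so concurrent tasks can't silently outlive their parent.
    async with trio.open_nursery() as nursery:
        nursery.start_soon(fetch, "first", 0.1)
        nursery.start_soon(fetch, "second", 0.2)

    # Classic learner mistake: calling an async function without awaiting it.
    # This does nothing useful; Python reports
    # "RuntimeWarning: coroutine 'fetch' was never awaited", the kind of
    # confusing but instructive error the control group hit more often.
    fetch("third", 0.3)

trio.run(main)
```

Working out why that warning appears, instead of pasting it straight into a chat window, is exactly the kind of productive struggle the study suggests is worth preserving.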
AI usage patterns

One of the more useful parts of the research was identifying six distinct patterns in how developers use AI, with dramatically different learning outcomes.

Three patterns correlated with poor learning (quiz scores between 24-39%):

- AI Delegation: handing everything to AI for code generation
- Progressive AI Reliance: starting by asking questions but gradually delegating all the actual coding
- Iterative AI Debugging: using AI to fix bugs without trying to understand why they happened

Three patterns preserved learning even with AI assistance (quiz scores between 65-86%):

- Generation-Then-Comprehension: generating code but then asking follow-up questions to understand it
- Hybrid Code-Explanation: requesting both code and explanations together
- Conceptual Inquiry: only asking conceptual questions, then writing the code yourself

What is interesting to us is that developers learn when they are mentally engaged with the problem. The high-scoring patterns all have something in common: the developer kept thinking. They used AI to help them understand rather than to avoid understanding.

Implications

The first is what happens to your senior engineer pipeline. Junior developers traditionally build expertise through struggle: debugging, making mistakes, developing intuition for why things fail. If AI shortcuts this process for an entire generation of engineers, organizations may find themselves with fewer people capable of growing into senior technical roles.

Another effect is the lack of in-depth knowledge of the frameworks and programming languages developers work with. When AI becomes the default way to learn new technologies, teams can accumulate technical dependencies without building the deep understanding needed to maintain and evolve those systems.

We've seen this already with teams that adopted frameworks quickly using AI assistance but now struggle to debug issues or make architectural changes because nobody truly learned the underlying technology.

This matters especially in safety-critical domains. Security, infrastructure, and financial systems all require people who can review code, not just accept what AI generates. You can't effectively review code for a library you've never really learned yourself. The human oversight that makes AI-assisted development safe depends on humans who understand what they're overseeing.

We've written before about research showing experienced developers were 19% slower when using AI on real-world tasks. When you add reduced debugging ability to that picture, the long-term productivity costs start looking more significant than the short-term speed gains.

What we think organizations should do

The research isn't an argument against AI coding tools. We use them ourselves (a lot!). It's an argument for being intentional about how they're used, especially when learning is part of the goal.

Make understanding part of code review. Ask developers to explain how their code works. The high-performing AI interaction patterns in the study all involved seeking explanations, and code review can reinforce the same habit.

Recognize that learning mode is different from production mode. There's a real difference between using AI to ship a feature in a technology you know well and using AI to learn something new. Organizations that acknowledge this distinction can adjust expectations accordingly. When someone is learning, slower is often better.

Keep some productive struggle in the process. When developers are learning new technologies, consider limiting AI assistance or focusing it on explanation rather than code generation. Working through problems yourself is slower, but you keep what you learn.

Watch for signs of dependency: developers who can't explain code they wrote, who struggle to debug without AI assistance, or who seem stuck on technologies they've supposedly been using for months. These are early signals that skill formation isn't happening.

Invest in real understanding, even when it's slower. Code that nobody truly understands is technical debt, even if it works. Making time for developers to build expertise in critical systems pays off when those systems need to evolve or when something goes wrong.

AI-enhanced productivity is not a shortcut to competence

AI coding tools offer a bargain: faster output today in exchange for potentially reduced capability tomorrow. For experienced developers working in familiar domains, that trade-off might work out fine. For developers learning new technologies, or for teams building systems that will require deep expertise to maintain, the cost may be higher than the benefit.

The researchers put it directly: "AI-enhanced productivity is not a shortcut to competence." We think that's the right framing. Organizations that treat AI tools as a shortcut to competence may eventually find themselves with teams that can generate code but struggle to understand it, exactly when understanding matters most.

AI is here to stay. Let's use it to make engineering teams stronger over time.

We help organizations build sustainable engineering practices. If you're thinking through AI adoption and want to talk about maintaining team capability while capturing productivity gains, we'd like to hear from you.