A Junior Dev Got Fired for AI Code. His Manager Used AI to Review It.
A junior engineer was fired after Cursor-generated code broke production. His manager reviewed it with AI. When nobody reads the code, who owns the failure?
A junior engineer at an Indian fintech startup was fired last week after Cursor-generated code caused a production failure. His manager had reviewed the same code using AI. Neither of them actually read it. And the question it surfaces has no clean answer yet: when AI writes the code and AI reviews it, who owns what breaks?
There's no tidy accountability framework for this. Not inside most companies, not in employment law, not in any clear doctrine of AI code liability. What we have instead is the outcome that always shows up when rules don't exist: the person with the least power takes the fall.
Table of Contents
- What Actually Happened
- Who Actually Owns the Failure Here?
- Amazon Did This Too, and Its Response Was Worse
- The Legal Situation Is Not Reassuring
- What Accountability Looks Like Without Regulation
- Who Is Accountable When AI Writes All the Code?
- FAQ
What Actually Happened
The story came from a Reddit post on r/DevelopersIndia by a software engineer who watched his colleague get fired. The colleague was a fresh graduate at an AI-focused fintech startup. Competent, by the poster's account. He "knew Python well, had practised data structures and algorithms, and had built a solid university project." Not underqualified. Just new.
The company was, in the original poster's words, "delusional" about AI. Managers pushed Cursor usage as a performance signal. The poster wrote that he'd had 1:1 meetings "entirely about how my cursor usage was the lowest in the company even though I have never missed a deadline." That's the environment. Not just AI-friendly. AI-mandatory.
The junior engineer held up fine at first. Deadlines were generous, he wrote code by hand, things moved along. Then the deadlines got tighter. To keep up, he started leaning on AI tools.
This is where the gap opened up. When he wrote his own code, he understood it line by line. When something broke, he could find the line. But with AI-generated code, he started understanding it in "chunks," knowing broadly what a function did without knowing how. The original poster put it plainly: "AI does not care if the file is fifteen thousand lines long. But after a point, it becomes a headache even looking at those files. So, this promotes fixing further bugs via AI."
One night, a Slack message came in at 11pm. Something in production had broken.
The team spent the next day tracing it. The bug was in the junior engineer's code. Specifically in some changes he'd generated with Cursor. His manager had reviewed the pull request, but the review itself was done with AI. As the post describes it: "he had generated some changes via Cursor before, and the manager merged it by reviewing it with AI." Nobody actually read the code. The junior engineer was promptly fired. It was his second production bug.
The Reddit thread that followed largely blamed the company, not the developer. One comment read: "Firing over two bugs is the stupidest thing ever. A useless and toxic company." Another: "When reviewing the code, he didn't bother checking the code quality and simply used AI to review and merge it. The onus of the bug is on them."
Who Actually Owns the Failure Here?
Let's map the accountability chain. The junior engineer wrote code using an AI tool his company mandated and measured him on. His manager reviewed that code using an AI tool. The code merged. Production broke. The junior engineer got fired.
At no point in that chain did any human actually read the code. Not the developer who generated it. Not the manager who approved it. The review was a formality performed by a second AI system on the output of a first AI system.
In traditional software engineering, code review works because the reviewer is accountable for what they approve. If you merge a pull request without reading it, you own the consequences of that merge. That's not written down in most places, but it's understood. The manager's signature is on the merge commit.
But if the manager reviewed with AI and rubber-stamped the result, what did they actually do? They delegated their judgment. And when the judgment turned out to be wrong, the company chose to fire the person who generated the code, not the person who approved it.
This isn't just an ethical objection. It's a structural problem. If you can offload accountability downward by adding an AI to your review process, you've created a system with no real reviewer and a convenient scapegoat.
Amazon Did This Too, and Its Response Was Worse
This pattern isn't unique to anonymous Indian startups. Amazon went through a version of it last December, at a scale a little harder to dismiss.
Kiro, Amazon's AI coding agent, launched in July 2025. An engineer, apparently without realizing they were in a production environment, used Kiro on a CloudFormation task. Kiro triggered a teardown-and-recreate. AWS Cost Explorer in the Mainland China partition went down for roughly 13 hours. It was the second AI-related service incident in as many months.
Amazon's response was the interesting part. The company published a post saying the events were "not linked to agents" and instead "stemmed from a misconfigured role, the same issue that could occur with any developer tool, AI powered or not." The Financial Times called it an AI-caused outage. Amazon called it user error.
The Register's take was sharper: "Their AI is implicated in deleting production infrastructure, and their crisis communications team's first instinct was to find a human and hurl them under the closest bus. AWS would rather have the world believe their engineers are incompetent than admit their artificial intelligence made a mistake."
Same pattern. Different scale. The AI does something that breaks things. A human takes the blame. The tool's reputation stays intact.
The Legal Situation Is Not Reassuring
There's no established legal framework for what happens when AI-generated code causes a failure. That's not an exaggeration. It's the actual state of things.
MBHB, a law firm that covers IP and tech, laid this out plainly in 2025: copyright on AI-generated code requires substantial human creative input, specifically iterative prompting, editing, and refining of the output. If the human mostly pressed "accept" on AI suggestions, the copyright position for that code is murky. And liability for failures? Even murkier. The article covers two fundamental questions, who owns AI code and who is responsible when it fails, and arrives at roughly the same answer on both: we don't have good rules yet.
Meanwhile, the scale is not small. At Meta's LlamaCon in April 2025, Satya Nadella said that 20 to 30% of Microsoft's code is now AI-generated, and Mark Zuckerberg projected that AI would handle half of Meta's coding work by 2026. That's mainstream software development, operating without any clear answer to the question of what happens when it goes wrong.
AI tool makers disclaim liability in their terms of service. Companies mandate the tools and measure developers on their usage. When something breaks, the employment-at-will doctrine says you can fire whoever you want for poor performance. So the liability flows to the person with the least power in the chain, and the people who actually set up the conditions for failure walk away.
What Accountability Looks Like Without Regulation
Government regulation isn't the answer here, unless there are real externalities like people getting hurt. The problem isn't a missing law. It's missing internal policy, and broken incentives.
A few things companies could actually do, if they wanted to:
Code review policy with teeth. If AI generates the code, a human has to read the code before it merges. Not "review it with AI." Actually read it. The manager who merged that PR at the Indian fintech startup didn't do a code review. They did an AI-summary approval. Those are not the same thing.
Manager accountability that's explicit. If a manager approves a PR, the manager should be on the hook for what's in it. That's not written down in most places, and it should be. You can't outsource that accountability to a language model.
Start with the incentives. Was anyone at this company actually told to review the code? Or were they told to ship fast and measured on Cursor usage? If the incentive structure rewards speed and AI adoption but never mentions code review, you've told people what you value. Don't act surprised when they optimize for it.
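The first of these policies can even be enforced mechanically rather than left to habit. Here's a minimal sketch of a CI merge gate: a pull request flagged as AI-generated can't merge until a human reviewer explicitly attests to having read it. The label names (`ai-generated`, `human-reviewed`) and the payload shape are hypothetical, not any real platform's API; in practice you'd wire this into your CI system's PR metadata.

```python
# Sketch of a CI merge gate for AI-assisted pull requests.
# Assumption: the CI system hands us the PR's labels; the label names
# "ai-generated" and "human-reviewed" are illustrative, not a real API.

def merge_allowed(pr: dict) -> bool:
    """Block AI-generated PRs that lack an explicit human-review attestation.

    A PR not flagged as AI-generated follows the normal review path.
    A flagged PR needs a human to affirmatively say: I read this code.
    """
    labels = set(pr.get("labels", []))
    if "ai-generated" not in labels:
        return True
    return "human-reviewed" in labels


# In a real pipeline, a failing check here would prevent the merge button
# from ever being clickable -- no attestation, no merge.
```

The point isn't the specific mechanism. It's that "a human must read AI code before it merges" becomes a gate the system enforces, not a norm the manager can quietly skip under deadline pressure.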
Every human in this chain failed. The junior dev submitted code he didn't fully understand. The manager approved code he didn't read. The company built a system that incentivized exactly this outcome and then punished the person at the bottom when the system produced the result it was designed to produce.
Firing him was wrong in principle. But the role he was filling is already shrinking, and not because of his performance.
Who Is Accountable When AI Writes All the Code?
Here's the uncomfortable part. AI is good. It's getting better. But it's not at the point where anyone should trust the outputs blindly. Not the developer, not the manager, not the company.
The junior dev should have understood his code better. Or, if he couldn't understand it, he should have known how to test it. That's a real failure. But the manager should have reviewed the code instead of rubber-stamping an AI summary. That's a bigger failure, because the manager had the authority and the responsibility. And the company should have built a process where someone, somewhere in the chain, was actually expected to verify what was shipping. That's the biggest failure, because it's the one that created the other two.
Every human in this story had a chance to catch the problem. None of them did. The AI didn't fail. The humans failed to do the part that's still their job.
It will happen again. Amazon's situation showed that even a company with the resources and engineering culture of AWS can end up in the same pattern. The pattern doesn't care about company size.
But this points at something bigger than this one incident. We're moving toward a world where the "coding" role isn't about writing code; it's about testing and validating it. The writing itself is the part that gets automated. The person in the reviewer's position, the one who verifies and approves, becomes the critical role: the one who makes sure the code does what it's supposed to do.
Given that, the junior dev's role was already shrinking. Firing him was the wrong call on principle, but the company was going to need fewer people in that seat regardless. What it actually needs is to re-evaluate its entire process for a world where AI writes most of the code and humans are there to catch what it gets wrong.
If you're a developer right now, the practical advice is not to stop using AI tools. They're useful. What you want is to be able to defend what's in the code you submit. If you can't explain it, learn how to test it. That's your protection. This guide to AI code assistants covers the tools, but the human layer is the part no tool automates.
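What does "learn how to test it" look like in practice? If you can't explain every line of an AI-generated function, you can still pin down its observable behavior before you submit it. A hypothetical example: suppose Cursor generated a `parse_amount` helper for you (the function below is a stand-in I wrote for illustration, not real generated output). The tests are the part you can defend in review, even when the implementation is opaque to you.

```python
# Hypothetical AI-generated helper (stand-in for illustration):
# converts rupee strings like "₹1,234.50" into integer paise.
def parse_amount(raw: str) -> int:
    cleaned = raw.replace("₹", "").replace(",", "").strip()
    rupees, _, paise = cleaned.partition(".")
    return int(rupees) * 100 + int((paise or "0").ljust(2, "0")[:2])


# The part that's still your job: pin the behavior down with tests,
# including the edge cases a quick skim of the code wouldn't catch.
def test_parse_amount():
    assert parse_amount("₹1,234.50") == 123450
    assert parse_amount("10") == 1000       # no decimal part
    assert parse_amount("0.05") == 5        # leading-zero paise
```

Tests like these do two things: they catch the bug before the 11pm Slack message, and they give you something concrete to say when someone asks whether you verified your own PR.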
If you're a manager, the question worth sitting with is this: when your AI review approves a pull request, what exactly did you just sign?
FAQ
Who is responsible for AI code when it causes a production failure?
Currently, this is largely unresolved. Legal analysis from 2025 shows no established framework for AI code liability. AI tool makers disclaim responsibility in their terms. Copyright for AI-generated code requires substantial human creative input to attach. The practical result is that liability falls on whoever is closest to the failure in an employment context: usually the developer who submitted the code.
Can an employer legally fire someone for submitting AI-generated code that broke production?
In most at-will employment jurisdictions, yes. An employer can terminate for code quality issues regardless of how the code was produced. Using an AI tool doesn't change the performance standard the employee is held to.
Should the manager be held equally accountable for approving an AI-reviewed pull request?
The engineering community largely says yes. Whoever merges a PR takes responsibility for what's in it. Using AI to review doesn't transfer that responsibility. Reddit commenters agreed: "the onus of the bug is on them."
What happened with Amazon's Kiro AI agent and the AWS outage?
Kiro triggered a CloudFormation teardown in production, taking AWS Cost Explorer down for 13 hours. Amazon called it user error. The Financial Times and others called it an AI-caused incident.
Evidence and Methodology
This article draws from the original r/DevelopersIndia Reddit post as covered by India Today, Hindustan Times, Economic Times, and Storyboard18. The Amazon/Kiro incident is sourced from PCMag and The Register. Legal analysis references MBHB's 2025 coverage of AI code ownership and liability. AI code adoption statistics are sourced from Business Insider's coverage of the Nadella/Zuckerberg conversation at LlamaCon (April 2025). All claims are linked inline to their sources.
Changelog
| Date | Change |
|---|---|
| 2026-03-02 | Initial publish |