The headline sounds like a punchline: top engineers at Anthropic and OpenAI say AI now writes 100% of their code. But they are not joking, and they are not alone.
Boris Cherny, who runs Anthropic’s Claude Code product, says he has not written code by hand for more than two months. Every pull request he shipped on two recent days – 22 one day, 27 the next – was “100% written by Claude” with no manual edits. Across Anthropic, he estimates “pretty much 100%” of code is AI‑generated, while the company’s official line is a still‑staggering 70–90% of code written by models, and around 90% of Claude Code’s own codebase written by Claude Code itself.
An OpenAI researcher posting as “Roon” puts it even more bluntly: “100%, I don’t write code anymore.” Programming, he argues, “always sucked… I’m glad it’s over,” as quoted by Fortune.
Layer on Anthropic CEO Dario Amodei’s timeline – AI writing 90% of code within 3–6 months, and “essentially all” code within a year – and you get the existential question underneath the headline: what happens to software development as a job, not just as a skill?
This is not a thought experiment anymore. It is a labor story.
The Hype, The Reality, And The Missing 70%
A useful starting point: separate the lab narrative from the industry baseline.
Inside Anthropic, Cherny says AI writes “pretty much 100%” of code for many engineers. Company spokespeople dial that back to 70–90% across the org. A recent Science paper analyzing Python functions on GitHub estimates that about 29% of functions from U.S. developers are AI‑written, with lower rates elsewhere. Microsoft and Salesforce both peg their AI‑generated code share at around 30%.
So: at frontier labs, a lot of code really is AI‑authored. In the Fortune 500 and across the broader ecosystem, we are still closer to one‑third.
That gap matters. It tells you we are not looking at a neat, universal “robots took the jobs” jump, but a messy diffusion process: elite teams with cutting‑edge tooling and very strong incentives to hype their own products, moving faster than everyone else.
It also hints at the power asymmetry here. Anthropic and OpenAI are not just using AI coding tools, they are selling them. When the head of Claude Code says his tool writes 100% of his code, that is both a workflow description and an ad.
The interesting question is less “is that technically true?” and more “what happens once CEOs and boards start treating that as the new benchmark for productivity?”
The First Casualties: Junior Jobs And The Ladder Itself
Fortune’s reporting notes what every hiring manager has seen: entry‑level software roles are drying up just as AI‑generated code surges.
This is the uncomfortable part of the “AI makes us all more creative” story. For decades, the software profession has run on an informal apprenticeship model:
- Juniors write glue code, boilerplate, unit tests.
- They learn the domain by living in that low‑risk, tedious work.
- Over a few years, some graduate into system design and architecture.
Now the exact tasks that once formed the first rungs of the ladder are the easiest to automate. A good coding model is essentially a tireless junior engineer that never complains about ticket grooming.
Cherny is blunt about how that shifts hiring: his team now prefers generalists and de‑emphasizes deep expertise in particular stacks, because “the model can fill in the details.”
That sounds efficient from the point of view of a well‑capitalized AI lab. From a labor‑market perspective it is brutal. If you erase the grunt work but keep the expectations of senior judgment, where, exactly, are people supposed to learn?
A progressive lens here pushes us to ask: whose careers get shortened, whose never start, and who gets to own and govern the tools that replaced their on‑ramp?
Software Engineering Was Never Just Typing Code
There is another tension in this story: the executives talking about 90–100% code automation are not claiming to have automated the parts of software engineering that involve power, judgment, or blame.
Listen closely to what they actually describe:
- Amodei imagines humans “feed[ing] the AI models with design features and conditions” even as code authoring gets automated.
- Windows Central notes that Instagram co‑founder and Anthropic CPO Mike Krieger expects engineers to transition into “double‑checking AI‑generated code rather than writing it.”
- AWS CEO Matt Garman similarly imagines developers “not coding” but upskilling into more AI‑oriented roles.
So the pitch is not “no more software engineers.” It is “software engineers as prompt architects, system designers, reviewers, human‑in‑the‑loop quality control.” Part product manager, part auditor, part babysitter for very confident autocomplete.
That shift has three big implications:
- Software becomes more like management consulting. Less time in the weeds of syntax, more time navigating stakeholders, ambiguity, and organizational politics. The value is in deciding what to build and why, not how to implement a function.
- Risk and responsibility get fuzzier. If AI agents generate the code, but humans “approve” it in review, who is liable when the code fails, leaks data, or quietly discriminates? We are re‑running the platform vs publisher debate from social media, but at the level of infrastructure, safety‑critical systems, and public services.
- The work gets more elite, not less. The people who can think in systems, reason about edge cases, and understand the socio‑technical context of their tools become more valuable. Everyone else becomes… replaceable.
In that sense, AI is not killing software engineering. It is strip‑mining it for its most easily codified parts and leaving behind a narrower, more rarefied profession.
The Inequality Story Under The Productivity Story
Executives sound remarkably aligned on this: AI is a jobs engine, if you are the right kind of worker.
Windows Central highlights Microsoft’s own Work Trend Index, which finds a surge in hiring for “skilled workers with an AI aptitude” and a 142x increase in LinkedIn profiles listing AI skills like Copilot and ChatGPT.
On paper, that is encouraging. In practice, it is a recipe for deepening inequality within the profession:
- Senior engineers who can ride the AI wave become “10x” multipliers, managing agents and shipping more features than ever.
- Mid‑career developers who can rebrand as AI‑augmented product builders do well.
- New graduates, bootcampers, and self‑taught coders find fewer rungs on the ladder and more job postings that translate to “must already be expert and must already know AI.”
From a democratic perspective, this is not just an HR issue. It is a question of who has economic security in an AI‑saturated economy, whose voices shape the tools that govern more and more of public life, and who is pushed into the growing pool of precarious workers in adjacent fields.
The people building the systems that manage benefits, handle voter registration, or allocate police resources should not all come from the same narrow, already‑privileged slice of the labor market.
Regulators Are Behind. Institutions Cannot Afford To Be.
Right now, most of the guardrails around AI‑generated code are internal: corporate guidelines, tooling constraints, and informal norms on developer forums.
On Hacker News, developers responded to the Fortune piece with healthy skepticism: pointing out conflict‑of‑interest hype, complaining about messy AI‑generated code that is hard to maintain, or arguing that relying entirely on LLMs for code suggests you have “stopped innovating.”
That pushback is good, but it is not policy.
If you care about democratic norms and the rule of law, you want at least four things to happen fast:
- Transparency requirements for critical systems. If you deploy AI‑generated code in infrastructure that touches health, finance, public services, or elections, regulators should know. At minimum: documented testing, traceability of changes, and clear lines of accountability.
- Labor standards for AI‑intensive roles. If you are going to replace entry‑level coding work with AI in publicly traded companies or major government contractors, you should not be allowed to quietly erase pathways into the middle class. Apprenticeship mandates, training subsidies, or targeted hiring incentives are all on the table.
- Public options for AI tooling. Leaving AI coding agents entirely in the hands of a few private labs invites the same concentration of power we saw with social platforms, just at a more structural layer. Public or open‑source alternatives – and public investment in education around them – are not just nice‑to‑haves. They are guardrails against capture.
- Professional norms that treat AI as instrumentation, not oracle. Civil engineering did not stop existing when finite‑element software arrived; it changed the curriculum and the liability regime. Software engineering needs something similar: accreditation that bakes in AI literacy and insists that “the AI did it” is never an excuse when things go wrong.
If the U.S. and other democracies do not move on this, someone else will set the norms: the handful of firms that own the models, the autocratic governments eager to deploy them without friction, or the market, which is not famous for prioritizing long‑term social stability.
So What Should Developers Do?
If you are looking at that “100% AI‑written code” claim and wondering whether to bail out of tech, it is worth zooming out from the hype cycles and thinking structurally.
Based on where things stand today:
- Coding as pure implementation is in trouble. If your day job is translating Jira tickets into CRUD endpoints with little design work, you are in the blast radius.
- Systems thinking, domain expertise, and judgment are defensible. The more your value comes from understanding health regulations, market microstructure, grid stability, or the messy reality of a particular community, the less easily you are replaced by the next Opus or o1 variant.
- AI fluency is not optional anymore. The people doing best inside this transition are the ones treating models as power tools, not competitors: using them to explore design spaces, generate test suites, and refactor legacy code while keeping human ownership of the architecture.
The irony is that this is the opposite of “programming is dead.” It is programming finally being forced to grow up into a civic profession, with consequences and duties that look less like leet‑speak and more like law and medicine.
The tools got better faster than the institutions. The question, for both engineers and policymakers, is whether we let that gap become a chasm that swallows a generation of workers, or whether we use this moment to rebuild the ladder while we are still standing on it.
