Code is cheap now. Software isn't.

The barrier to writing code collapsed. The coding agent market exploded from zero to $3 billion in five years. Production teams now deliver in 12 weeks what took 6 months. This article shows how the economics changed, what we've learned shipping code agents into production, and why this moment is not the end of anything. It's a beginning.

22.01.2026 · Agentic AI · 14 min read

When factories changed shape

When interchangeable parts met electric power, factories did not simply move faster. They changed shape. Lines decoupled from a single drive shaft. Machines could be arranged into cells. Setup times fell, maintenance became predictable, and buffer stocks shrank. Overall Equipment Effectiveness rose because availability, performance, and quality moved together. Costs dropped. New product lines became viable.

The same rewrite is happening in software. Once agentic coding became the default in real teams, weekly code merges rose sharply, revert rates held steady, bug work fell, and the scope of work did not shrink. This is not a cosmetic tool upgrade. It is a change in the production function for digital work.

The $3 billion revolution that didn't exist five years ago

Figure: Agentic development market chart

Look at the chart. Five years ago, only Microsoft's GitHub Copilot existed as an experimental preview. Today, eight major players compete for a market growing at unprecedented velocity. Claude Code's vertical surge from zero to $1 billion ARR in just nine months. Cursor's hockey-stick acceleration. OpenAI Codex's late but aggressive entry.

This isn't incremental improvement. This is a rewrite of the production function for digital work.

Notice the inflection points. They cluster around October 2024 to July 2025. That's when enterprises moved from pilots to production. When CTOs stopped asking "should we?" and started asking "how fast can we scale?" When the early adopters' productivity gains became impossible to ignore.

Microsoft GitHub Copilot pioneered the space, reaching $1.1 billion ARR through steady enterprise adoption. But the real story is the velocity of the newcomers. Claude Code compressed what took Copilot three years into nine months. Cursor built a $900 million business essentially overnight. Even the late entrants are tracking toward nine-figure ARRs within their first year.

What this means for your business: Your consulting firms or vendors might still be quoting six-month projects with armies of developers typing code by hand, charging 2019 prices for 2026 work. If they've already adapted to this new reality, wonderful. If not, it's worth checking. Companies like paterhn leverage these exact tools to deliver the same scope in 12 weeks at 30-40% of the cost. When production costs drop this dramatically, everyone wins, especially you, the customer.

Every month you wait is another month paying yesterday's prices for tomorrow's technology. Your next RFP should demand evidence of agentic development. Your next vendor should show you working software in 30 days, not slides in 90.

The question isn't whether these tools will transform your industry. They already have. The question is whether you'll capture that value or watch it pass by.

The shift from syntax to semantics

For decades, software meant translating intent into syntax by hand. A developer sat at a keyboard and shaped logic line by line. Agents alter that posture. You express a goal in plain language. The system plans, proposes code, runs tests, and presents diffs for review. The human role moves from typing to specifying and evaluating.

This does not erase engineering; it reshapes it. The machine takes repetition, boilerplate, and refactors. The human supplies context, constraints, and judgment. Teams that win in this model excel at writing precise intent, anticipating failure modes, and deciding quickly with evidence.

A boundary has moved. Non-engineers can complete short-horizon, verifiable tasks by instructing agents in plain language. Designers can stub features. Product managers can refactor a query or draft tests. Analysts can stand up simple services. That is real access. Yet the largest lift lands with people who already understand architecture, coupling, and trade-offs. The skill is no longer the keystroke. The skill is clarity of instruction and quality of evaluation.

The implication is blunt. If your organization cannot describe what it wants in crisp, testable terms, the tool will not rescue you. If your leaders and principal engineers can, the tool multiplies their reach.

The great amplification

The dream was simple. Give everyone a conversational interface and everyone can code. The reality is sharper. Agentic coding delivers the highest lift to the most experienced people. It rewards those who already think in systems and constraints. That is not a flaw. That is how leverage behaves.

A November 2025 study, AI Agents, Productivity, and Higher-Order Thinking: Early Evidence From Software Development, confirms what practitioners have observed. Experienced workers accept agent output at significantly higher rates than juniors. They write tighter prompts with minimal ambiguity. They decompose work into agent-compatible units. The gap between senior and junior performance widens before it narrows.

And they're getting better every day. Every time an engineer accepts code, that's a signal. Every rejection. Every edit. Millions of people, continuously training the next version. That's RLHF at planetary scale.

The capability matrix

| Skill | Junior | Senior | Staff |
| --- | --- | --- | --- |
| Capability | 2/10 | 9/10 | 10/10 |
| Clarity | 2/10 | 8/10 | 9/10 |
| Delegation | 0/10 | 3/10 | 9/10 |
| Orchestration | 0/10 | 3/10 | 9/10 |

When capability is commoditized, everything else becomes the differentiator.

At paterhn, ~70% of our commits are now agent-assisted. Not finished code. Agents break the blank page problem, generate scaffolding, and propose solutions. Engineers review, direct, and own the merge. The agent doesn't ship. The engineer ships.

But 100% of the architecture, judgment, and accountability is still human. That ratio hasn't changed.
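A ratio like this is easy to track if commits carry a marker. A minimal sketch, assuming agent-assisted commits are tagged with a trailer; the trailer text `Assisted-by: agent` is purely illustrative, so adapt it to whatever convention your team actually uses:

```python
# Sketch: measure the share of agent-assisted commits, assuming your
# team marks them with a commit-message trailer. The trailer string
# below is an illustrative convention, not a git standard.

AGENT_TRAILER = "Assisted-by: agent"

def agent_assisted_share(commit_messages: list[str]) -> float:
    """Fraction of commit messages carrying the agent trailer."""
    if not commit_messages:
        return 0.0
    assisted = sum(AGENT_TRAILER in msg for msg in commit_messages)
    return assisted / len(commit_messages)

# Illustrative data, e.g. collected via `git log --format=%B`.
messages = [
    "Add retry logic\n\nAssisted-by: agent",
    "Fix flaky test",
    "Refactor billing module\n\nAssisted-by: agent",
    "Update docs\n\nAssisted-by: agent",
    "Hotfix: null check",
]

print(f"{agent_assisted_share(messages):.0%} agent-assisted")  # 60%
```

Counting trailers keeps the metric auditable: the evidence lives in the repository itself, not in a tool vendor's dashboard.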

This is what "code is cheap, software isn't" looks like in practice. You can generate enormous complexity for free. The discipline is knowing what to keep.

Call it cognitive leverage. Tools amplify what you have, not what you lack. A senior engineer who writes a clear plan, specifies invariants, and enumerates edge cases will get aligned output quickly. A junior engineer who cannot frame the problem will still prompt for code that compiles, yet fails where it matters.

Design for this. Hyper-empower your best people. Give them ownership of the highest-leverage streams. Pair them with hungry juniors who learn by working through real evidence and review. The juniors level up faster than ever before because the feedback loop is tighter. The agents handle the syntax so the mentorship focuses on judgment.

Spend on planning and evaluation, not only on seats and licenses. In a world where agents draft code on request, the scarce resource is architectural judgment and the ability to decide with data.

A concrete analogy from manufacturing: OEE for software

Operations veterans measure manufacturing with Overall Equipment Effectiveness. OEE equals availability times performance times quality. Miss one factor and throughput collapses. Agentic development changes all three factors in software terms.

Availability rises because work starts as soon as intent is clear. Agents do not wait for a stand-up or a sprint boundary. Queues and idle time shrink.

Performance rises because agents generate scaffolds, tests, and refactors at machine speed. Your best people stop losing afternoons to tasks that require little judgment.

Quality holds when tests and review stay intact. Evidence from real teams shows no penalty in reversions and fewer bug-fixes after release. You get more good merges without paying for them in rework.

Raise all three and your unit cost drops. When the cost of a validated change falls, you can fund more parallel bets for the same budget. You can kill weak bets sooner and recycle spend into the winners. Portfolio returns look different because the learning cycle is cheaper. This is how agentic development shows up in a P&L.
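The arithmetic behind the analogy is simple to check. A back-of-envelope sketch, where every input figure is a hypothetical illustration rather than measured data:

```python
# Hypothetical OEE-style calculation for a software delivery pipeline.
# All input figures are illustrative assumptions, not measured data.

def oee(availability: float, performance: float, quality: float) -> float:
    """OEE = availability x performance x quality, each in [0, 1]."""
    return availability * performance * quality

# Pre-agent baseline (assumed): queues idle 40% of the time, throughput
# at 60% of theoretical pace, 90% of merges survive without rework.
baseline = oee(availability=0.60, performance=0.60, quality=0.90)

# Agent-assisted (assumed): less idle time, faster scaffolding, and
# quality held steady because tests and review stay intact.
agentic = oee(availability=0.85, performance=0.80, quality=0.90)

print(f"baseline OEE: {baseline:.2f}")  # 0.32
print(f"agentic OEE:  {agentic:.2f}")   # 0.61
print(f"uplift: {agentic / baseline:.1f}x")
```

The multiplicative structure is the point: because the factors compound, a modest gain in availability and performance with quality merely held steady is enough to roughly double effective throughput.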

Real-world cases

Compliance intelligence layer

The client operates network infrastructure for digital asset exchanges. Their problem: counterparty due diligence. AML. Financial crime prevention at scale.

Before: Sanctions-only checks, hours per counterparty, point-in-time reviews, manual re-assessment.

After: 13 agents. 21 seconds. Continuous monitoring. Fully auditable.

Regulatory intelligence for a Swiss bank

We delivered regulatory intelligence and control testing infrastructure to a Swiss financial institution. The board refused a black box. The audit committee required full traceability, Swiss or EU residency, and clear evidence that quality would hold.

The traditional plan called for six to nine months. Long requirements, slow integration, and a big reveal. We offered a different path. We scoped a twelve-week Proof of Value on one regulation set and one control family. We instrumented the existing document management and ticketing systems. We built a typed compliance knowledge graph and added a graph neural model to predict which controls a change would touch. The agent generated a first-draft impact note with clause-level citations. Reviewers remained in control. Every prompt, plan, model version, and decision was logged.

The technical edge: we run the GNN and LLM in the same loop. Normally you use GNNs for structured data and LLMs for semantics. Separate systems. We put them in the same vector space. A name on a sanctions list is just a string. But the LLM understands that "Steinberg Handel AG Zug" and "Stone Mountain Trading DMCC Dubai" might share a beneficial owner. That semantic similarity and the graph structure live in the same embedding, the same forward pass. For AML and compliance, this changes what's possible.
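The shape of that shared space can be sketched. A minimal illustration, assuming a graph encoder and a text encoder each project into one common vector space; the encoders below are random stand-ins, and every name and dimension is hypothetical rather than the production system:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # shared embedding dimension (illustrative)

def l2_normalize(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

# Stand-ins for the two encoders. In a real pipeline these would be a
# GNN over the compliance knowledge graph and an LLM text encoder, each
# followed by a learned projection into the same space. Here they are
# random vectors, so the similarity value itself is meaningless.
def graph_embed(entity_id: str) -> np.ndarray:
    return l2_normalize(rng.standard_normal(DIM))

def text_embed(name: str) -> np.ndarray:
    return l2_normalize(rng.standard_normal(DIM))

def joint_embed(entity_id: str, name: str, alpha: float = 0.5) -> np.ndarray:
    """Blend structural and semantic signals into one vector."""
    return l2_normalize(alpha * graph_embed(entity_id)
                        + (1 - alpha) * text_embed(name))

a = joint_embed("entity:123", "Steinberg Handel AG Zug")
b = joint_embed("entity:456", "Stone Mountain Trading DMCC Dubai")
similarity = float(a @ b)  # cosine similarity; flag pairs above a threshold
print(f"joint-space similarity: {similarity:.3f}")
```

The design choice this illustrates: once both signals live in one vector, a single nearest-neighbour query can surface counterparties that are close structurally, semantically, or both, instead of reconciling two separate ranking systems after the fact.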

By week ten, median time to first draft fell from three hours twenty minutes to fifty-eight minutes on the scoped set. Reviewers accepted most drafts with minor edits. Citation precision met audit expectations. Queue times for priority items nearly halved.

The hidden win was option value. Once the first scope met threshold, leadership could extend to adjacent regulation families with confidence. Evidence replaced opinion. They learned in twelve weeks, spent less, and moved first.

The economics in practice

| Approach | Team size | Time | Cost |
| --- | --- | --- | --- |
| Traditional enterprise build (pre-agents) | 8-12 | 9-12 months | $1M-$1.5M |
| Code agents + orchestration (today) | 3-5 | 8-12 weeks | ~$300K-$500K |

The unit economics changed. More people can specify intent now. That's real. But production still requires engineering discipline and controls. That hasn't changed.

The real metric here isn't cost. It is velocity.

Your next build: a leader's blueprint

Scrap your six-month epics. Embrace 12-week value cycles. Pick one workflow with measurable business value. Lock the KPI and baseline in the first two weeks. Deliver a minimum viable agent by week eight. Keep the integration surface small: one API, one queue, one lane. Run in shadow or to a small cohort through week eleven. Compare against the baseline you agreed. Decide at week twelve: scale, iterate, or stop. Publish the evidence either way.

Your RFP is obsolete. Ask for a working slice in 30 days. Require a running slice on your data by week four. Shadow mode is acceptable. Demand a measurement plan in writing. Name the target KPI and how it will be tracked. Price the present: vendors must show how agents compress scaffolding and refactors, not just list roles and hours.

Redefine "done." Done means a working slice, an evidence log, and movement on the agreed KPI. Evidence includes prompts, diffs, tests, approvals, model versions, and feature flags. If you cannot show it to your board or auditor without flinching, it is not done.

Pair senior judgment with scope that matters. Put your most experienced builders on the highest-leverage streams. Agents amplify their planning and evaluation. Teach plan-first habits to the whole team. English is the interface, clarity is the tool. Keep quality steady by leaving tests and review intact. Speed should not buy rework.

The builder test: 10 questions to price the present

Use this checklist in your RFPs and interviews. If a vendor hesitates on these points, you know what they are selling.

  1. Working slice by week 4? Demand a running agent on your data, not slides. Shadow mode is acceptable.
  2. KPIs and baselines in writing? The SOW must name the metric that moves by week twelve and how it will be tracked.
  3. Evidence from day one? Require queryable logs for prompts, diffs, tests, approvals, model versions, and feature flags.
  4. Ownership explicit? Source code must be in your repository, with infrastructure as code, and clear IP ownership for custom components.
  5. Residency options real? Confirm Swiss, EU, or sovereign cloud options with least privilege by default.
  6. Minimal viable integration? Is the plan the smallest, safest path to value, or a giant plumbing project?
  7. Fixed-fee Proof of Value? Insist on clear acceptance criteria and a go/no-go gate at week twelve, tying budget to outcomes.
  8. Subcontracting transparency? If an advisor brings a builder, meet them. Put their deliverables in the contract by name.
  9. Pricing reflects agentic reality? If quotes assume hand-typed scaffolds and slow refactors, you are paying yesterday's prices.
  10. Weights you own? If the system learns from your data, who owns the resulting model? Your tacit knowledge should end up in weights you control, not training someone else's foundation model.
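Item 3 above is concrete enough to sketch. A hypothetical evidence-log entry covering the fields the checklist names; the field names and values are illustrative, not a standard schema:

```python
import json
from datetime import datetime, timezone

# Sketch of one evidence-log entry. It covers the fields the checklist
# names: prompts, diffs, tests, approvals, model versions, and feature
# flags. Every field name and value here is an illustrative assumption.
entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "prompt": "Refactor the sanctions-screening query to batch lookups",
    "diff_ref": "commit:abc1234",          # pointer into your VCS
    "tests": {"run": 148, "passed": 148},
    "approval": {"reviewer": "j.doe", "decision": "approved"},
    "model_version": "agent-model-2026-01",
    "feature_flags": ["batch_screening"],
}

print(json.dumps(entry, indent=2))
```

Stored as structured records rather than free-text logs, entries like this stay queryable, which is what makes "show it to your auditor" a five-minute task instead of a forensic exercise.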

Ownership isn't just about code

Satya Nadella stood at Davos and named what he called "the least talked about topic in AI." Model sovereignty.

His exact words: "If you're not able to embed the tacit knowledge of a firm in a set of weights, in a model you control, by definition you have no sovereignty. You are leaking enterprise value to some model company somewhere."

Notice the word he chose. Not knowledge. Tacit knowledge.

Tacit knowledge is the stuff that isn't written down. It's how your best compliance officer knows which counterparty relationships feel wrong before the data confirms it. It's how your senior engineer knows which architectural decisions will cause pain in three years. It's the institutional memory that walks out the door when someone retires and can never be fully replaced.

That knowledge can now be captured. Encoded. Made operational. But only if you own the resulting model.

Here's where the conversation goes wrong. People hear "sovereignty" and assume everything must run on-premises. Every model. Every inference. Full air-gap.

That's not the point.

Most tasks don't need sovereignty. Summarizing a public document. Generating boilerplate. Answering common questions. Use the APIs. They're fast, cheap, and good enough.

But your crown jewels? The processes that encode genuine competitive advantage? The workflows where your tacit knowledge is the moat? Those should never train someone else's model.

The vendors who haven't adjusted their pricing are also the ones capturing your tacit knowledge with every API call. Every prompt teaches their model. Every correction refines their weights. Your competitive advantage becomes training signal for a foundation model that serves your competitors next quarter.

The question isn't just what you pay for software. It's who owns what your data becomes.

Pricing the present, not the past

Agentic coding changed the cost structure. Your proposals should reflect that reality. Expect fewer raw engineering hours for the same scope. Expect a higher share of effort in specification, evaluation, and safety. Expect earlier decisions with clearer evidence.

Here is the buying rule that protects your capital. Ask for a working slice in thirty days, an evidence log you could present to your board, and a decision gate at week twelve. If a vendor hesitates, you know what they are selling. If they agree, you know who is ready for the new factory.

The age of the builder is here

Picture a production line where changeovers take minutes, not hours. Smaller batches become economical. Planners stop fearing experiments. That is the line you buy when you fund agentic development.

As code generation becomes easier, the value of true architectural vision and expert evaluation rises sharply. The future will not belong to those who simply use AI. It will belong to those who orchestrate it across workflows, testing, governance, and handover; those who can turn a strategy into a working system in twelve weeks, then scale it with confidence.

Two messages

For executives: The window is open. The costs dropped. The tools work. But windows close.

If you're not building production AI capability now, you'll be buying it from someone who did, at their price.

Stop paying yesterday's prices. Fund the shortest safe path to proof. Think big. Start small. Ship weekly. Measure honestly.

For engineers: Especially the young engineers who might be feeling some despair right now, wondering if what they spent years learning just became obsolete.

It didn't.

This is not the end of software engineering. This is our big bang.

All software becomes generative. Most software becomes generated. Plan accordingly.

Strategy is hypothesis. Shipping is proof.

Key Takeaways

The economics changed, but not what you think. Code generation got cheap. Architecture, judgment, and accountability didn't. 70% of commits can be agent-assisted, but 100% of production discipline is still human.

Your vendors are probably overcharging you. The coding agent market hit $3B because these tools work. If your vendors haven't adjusted their pricing, you're paying 2019 rates for 2026 capability.

Sovereignty means owning the weights, not just the data. Your tacit knowledge can now be captured in models. If those models live in someone else's infrastructure, your competitive advantage becomes their training data.

AI Agents · Agentic AI · Custom LLMs · Multi Agent Systems
