Expert knowledge, tribal knowledge, and agent labor

The seat was the wrong unit of value. The next operating model is built around expert knowledge, tribal knowledge, agent labor, evidence, and accountable human judgment.

04.05.2026 · Agentic AI · 10 min read

Software entered 2026 with a repricing too large to dismiss as sentiment.

By late March, the iShares software ETF was down more than 20%, with several reports putting the market-cap damage near two trillion dollars. Salesforce trades below 13 times forward earnings, down from a ten-year average of 45. Adobe sits below 10 times. The financial press calls it the SaaSpocalypse. The name is useful. The diagnosis is too theatrical.

The market is not saying software has no value.

The market is saying the seat was the wrong unit of value.

For twenty years, enterprise software priced itself around a simple assumption: value is created by humans operating software. More employees meant more users. More users meant more seats. More seats meant more revenue.

That model works when software is the place humans go to do work.

It breaks when agents start doing the work.

A compliance team does not need fifty logins. It needs screenings completed, evidence preserved, audit trails closed, and decisions prepared for accountable human review. The value is not access. The value is completed work with proof attached.

That is the repricing.

The interface was never the asset

This does not mean every SaaS company disappears.

Systems of record remain. Salesforce, Snowflake, ERP, data warehouses, core banking systems, HRIS, ticketing systems, and regulatory archives still matter. They hold state. They store history. They coordinate parts of the business that cannot be regenerated every morning.

The layer under pressure is different. It is the generic workflow layer. The dashboard layer. The place where a company bends its own operating model to fit the assumptions of someone else's software.

Vercel's CEO Guillermo Rauch described the pattern clearly when he wrote that almost every SaaS app inside Vercel had been replaced with a generated app or agent interface. The important part was not that Vercel stopped using systems of record. It did not. The important part was why the generated systems worked better:

"UI is a function f of data, and that f is increasingly becoming the LLM."

Vercel's own customer-health problem did not fit the ontology of Salesforce. So they generated software that did. That word matters.

Ontology is how a company understands itself. Its objects, relationships, exceptions, rules, escalation paths, and judgment boundaries. Generic SaaS asks the company to fit the vendor's ontology. Agentic systems let the software fit the company's ontology.

That is the real shift.

Not SaaS to no software. Generic workflow software to domain-specific work systems.
The shift is not from software to no software. It is from generic workflow software to domain-specific work systems.

The three layers every company runs on

Strip a company down to what actually produces output and you find three layers. They do not appear cleanly on the org chart. They do not show up in the SaaS bill. But every serious operation runs on the interaction between them.

The first layer is expert knowledge. The formal, domain-specific knowledge that distinguishes a compliance analyst from a logistics planner from a clinical trial coordinator. Regulations, policies, SOPs, risk taxonomies, process maps, technical standards, legal definitions. Teachable. Documentable. It is what most enterprise software was built to support.

The second layer is tribal knowledge. The part that is not written down. The senior compliance officer who knows that a specific ownership pattern feels wrong before a rule triggers. The plant manager who hears a machine and knows it will fail tomorrow. The customer success lead who knows which accounts need a call before a release. The risk officer who has seen enough edge cases to know when a clean-looking file is not clean. Tribal knowledge is why two teams with the same SOPs produce different results.

The third layer is the execution substrate. For most of the software era, humans were the substrate. Software organized the work, routed the work, stored the work, and reported on the work. The human still performed the operational labor.

That is what changed.

Agents are becoming a new execution substrate for the repetitive, structured, verifiable parts of knowledge work. Humans do not disappear. They move up the stack, toward judgment, accountability, exception handling, and final decision-making.

That is the new operating model:

  • Expert knowledge defines the work.
  • Tribal knowledge improves the work.
  • Agent labor executes the work.
  • Humans close the loop.

What changed is what software is now allowed to do

The shift is already visible in engineering.

Linear says coding agents are installed in more than 75% of its enterprise workspaces, agent-completed work grew 5x in three months, and agents authored nearly 25% of new issues. Salesforce introduced Agentic Work Units to measure tasks completed by agents rather than tokens consumed. NVIDIA framed GTC 2026 around AI factories, agentic systems, inference, and the infrastructure required to turn compute into intelligence at scale.

These are different signals from different parts of the stack, but they point in the same direction.

The unit is moving from access to work.

Tokens are not the outcome. Seats are not the outcome. A generated interface is not the outcome. The outcome is the work completed, the evidence produced, and the decision made possible.

That is why the SaaSpocalypse framing is incomplete. The market is not killing software. It is sorting software by whether it owns the work layer or merely sells access to a workflow.

Agent labor without knowledge is a demo

The mistake many companies will make is assuming a frontier model plus data access equals an agent system.

It does not. A general-purpose model connected to a CRM is a chatbot with permissions. Useful, sometimes. Dangerous, often. Durable as an operating model, rarely.

A real agent labor system has to do four things that a generic model does not do by itself.

First, it has to encode expert knowledge in a form agents can act on. Not as one long prompt. As specialist roles, retrieval structures, validation checks, domain objects, policies, and workflows that match how the business actually thinks. Compliance does not think in free text. It thinks in entities, jurisdictions, obligations, ownership chains, thresholds, evidence standards, and exceptions. Manufacturing does not think in tickets. It thinks in routings, tolerances, and bills of materials. Generic agents do not have this. They guess.
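To make this concrete, here is a minimal sketch of what encoding expert knowledge as domain objects can look like, using a hypothetical beneficial-ownership screening rule. The 25% threshold, class names, and function are illustrative assumptions, not any vendor's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class OwnershipStake:
    owner: str          # owning party
    percentage: float   # direct ownership share

@dataclass
class Entity:
    name: str
    jurisdiction: str
    stakes: list[OwnershipStake] = field(default_factory=list)

# Policy encoded as data, not prose: beneficial owners at or above the
# threshold must be screened individually. Common in AML frameworks,
# but configurable per firm policy.
UBO_THRESHOLD = 25.0

def owners_requiring_screening(entity: Entity) -> list[str]:
    """Return owners whose stake crosses the screening threshold."""
    return [s.owner for s in entity.stakes if s.percentage >= UBO_THRESHOLD]

acme = Entity("Acme Holdings", "DE", [
    OwnershipStake("Alice", 40.0),
    OwnershipStake("Bob", 10.0),
])
print(owners_requiring_screening(acme))  # ['Alice']
```

The point is not the rule itself. It is that an agent can act on a structured policy like this, validate against it, and log which threshold fired, none of which is possible when the policy lives in one long prompt.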

Second, it has to capture tribal knowledge without pretending tribal knowledge is mystical. The signal is in the work: overrides, escalations, review comments, rejection reasons, policy exceptions, corrections, second looks, and final decisions. A good agent system records those signals and feeds them back into the customer's operating model. Over time, the system becomes better at that company's work because it learns from that company's judgment.
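One way to capture those signals is an append-only log of judgment events that can later be replayed for evaluation or training. A minimal sketch, with hypothetical names and event kinds:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class JudgmentSignal:
    case_id: str
    kind: str        # e.g. "override", "escalation", "rejection", "approval"
    reason: str      # free-text rationale from the reviewer
    reviewer: str
    at: datetime

class SignalLog:
    """Append-only record of human judgment events, replayable later."""
    def __init__(self) -> None:
        self._events: list[JudgmentSignal] = []

    def record(self, case_id: str, kind: str, reason: str, reviewer: str) -> None:
        self._events.append(
            JudgmentSignal(case_id, kind, reason, reviewer,
                           datetime.now(timezone.utc)))

    def overrides(self) -> list[JudgmentSignal]:
        # Overrides are the highest-signal events: a human disagreed
        # with the system and said why.
        return [e for e in self._events if e.kind == "override"]

log = SignalLog()
log.record("case-42", "override", "Ownership chain looks circular", "j.doe")
print(len(log.overrides()))  # 1
```

Nothing mystical: the tribal knowledge enters the system as structured disagreement, with a reason attached.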

Third, it has to produce evidence, not just output. Output is an answer. Evidence is an answer plus sources, provenance, actions taken, reasoning path, uncertainty, human review, and a record that can be reconstructed later. In regulated workflows, output without evidence is unusable.

Fourth, it has to keep the human in the right place. The human is not a decorative fallback. The human is the accountable decision-maker. The system should make judgment cheaper to apply, easier to audit, and more consistent across cases. It should not hide uncertainty behind confident language.
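In practice this often takes the shape of an explicit routing rule: above some uncertainty threshold, nothing is finalized without human sign-off. A minimal illustrative sketch; the threshold value and names are assumptions:

```python
def route_for_review(uncertainty: float, threshold: float = 0.2) -> str:
    """Decide whether a human must sign off before a result is final.

    The system never auto-finalizes above the threshold; even below it,
    the result still lands in the audit trail rather than disappearing.
    """
    return "human_review" if uncertainty > threshold else "auto_with_audit"

print(route_for_review(0.05))  # auto_with_audit
print(route_for_review(0.40))  # human_review
```

The threshold is a governance decision, not a model property, which is exactly why it belongs in explicit, auditable code rather than behind confident language.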

Vercel's engineering guidance says the quiet part directly: "Vibing and mission-critical infrastructure don't go together." That applies beyond code. In compliance, finance, pharma, supply chain, and safety-critical operations, agent labor needs governance from the start.

Autonomy without evidence is not production. It is liability at speed.

Compliance is the cleanest proving ground

Compliance makes the shift obvious because the work has legal meaning.

A screening is complete or it is not. An evidence pack is defensible or it is not. An audit trail can be reconstructed or it cannot. A risk classification follows the relevant framework or it does not.

That makes compliance a better proving ground for agent labor than many softer enterprise workflows. The work is repetitive, but not simple. It requires source handling, relationship reasoning, policy interpretation, escalation logic, documentation, and human sign-off. It also has a clear buyer pain: teams face more regulation, more counterparties, more AI governance requirements, more vendor checks, and more scrutiny without matching increases in headcount.

The old answer was another dashboard. That answer is wearing out.

The new answer is agent labor with evidence built in. That is why we are building cmpliance.

cmpliance is not a compliance dashboard with AI added to the side. It is not a new product on an old shelf. It is a new category: agent labor for compliance. Agents prepare the repetitive work, assemble the evidence, reason over documents and relationships, surface the decision points, and preserve the audit trail. The compliance team still makes the decision. The system makes the decision easier to make and easier to defend.

That closed loop matters.

Agents do the work. Humans make the call.

What survives the repricing

The companies that survive this software repricing will share a few properties.

  • They will own a domain ontology, understanding the objects, relationships, obligations, and exceptions inside a specific kind of work.
  • They will capture expert and tribal knowledge as part of the system, not leave it trapped in people's heads.
  • They will produce evidence, not just answers.
  • They will preserve human judgment, not hide it.
  • They will make model choice a configuration decision, not a dependency that controls the product.
  • They will price closer to work completed than seats assigned.

The companies that struggle will share the opposite pattern: generic workflow software, generic dashboards, generic AI wrappers, and pricing models tied to human attendance.

Code is cheap now. Interfaces are cheap. A basic SaaS app can be cloned in an afternoon.

Systems are not cheap. A system that captures domain knowledge, preserves evidence, learns from human judgment, survives model changes, and produces work an enterprise can defend is still hard. That is where the durable value moves.

Builders, not commentators

paterhn builds production AI systems you own. We have been shipping production Machine Learning since 2001 and foundation-model agent systems since 2019. The thesis has stayed consistent: companies do not need more black boxes, more seats, or more strategy decks. They need working systems on their infrastructure, measured against real operational baselines.

How we build it, in one line:

"We use frontier APIs as development tools. The system we deliver runs on architecture you own, with the expert and tribal knowledge of your firm encoded into it. Your judgment ends up in your model, not theirs."

The agent shift makes that position more important, not less. As agents become capable of doing more work, the scarce resource becomes judgment: what to automate, what to verify, what to expose to humans, what evidence to preserve, what model to trust for which task, and what architecture survives the next model release.

That is the work layer. That is where the next generation of enterprise software will be built. Not around seats. Around labor, evidence, and accountable decisions.

If your operation still pays for repetitive analytical work through headcount and per-seat tools, the practical question is simple:

What would the same budget produce if it bought agent labor instead?

That is the conversation we are having every week with operators and decision-makers. We are happy to have it with you.

Builders. Not consultants. Shipping production agent systems since 2019.

Key Takeaways

The market is not repricing software because software stopped mattering. It is repricing the assumption that value scales with human seats.

The next operating model has three layers: expert knowledge, tribal knowledge, and agent labor.

Generic SaaS forced companies into the vendor's ontology. Agentic systems can fit the company's own objects, rules, exceptions, and judgment boundaries.

In regulated work, output is not enough. The durable product is work completed with evidence, auditability, and accountable human review.

Compliance is the cleanest proving ground because the work is repetitive, evidence-heavy, and legally meaningful.

Agent Labor · Agentic AI · AI Agents · Multi Agent Systems
