The Company as Code
Companies increasingly behave like software: design processes as modules, instrument them, refactor ruthlessly, and blame systems rather than people, so you end up with lightweight organizations that learn fast.
When you first hear the phrase “a company is code,” it sounds like one of those tech-metaphors people use when they’re trying to make something ordinary feel futuristic. But I think it’s pointing at something real that’s been happening quietly for a while: companies are becoming designed artifacts in a more literal way than they used to be.
Not “designed” the way you design a logo, or a mission statement, or an org chart that immediately becomes fiction. Designed the way you design software: you choose primitives, you define interfaces, you instrument behavior, you run experiments, you refactor, you delete.
Most companies are still run as if they were villages. The software-company view is that they should be run more like systems.
That sounds cold until you notice what’s actually cold about the village model: it’s full of folkways, mysteries, and blame. It runs on “who knows what,” and “how we do things,” and “talk to Sarah, she’s the only one who can fix it.” It produces a lot of moral judgment. When things don’t work, we conclude someone is failing.
Engineers have a different reflex. When something doesn’t work, they assume the system is wrong.
W. Edwards Deming, who spent a lifetime trying to drag management into the 20th century, put it bluntly: “A bad system will beat a good person every time.”
That sentence is almost offensively charitable toward people. It says: don’t romanticize heroics, and don’t pathologize normal human limits. If the system requires constant heroics, the system is broken.
The reason “company as code” is suddenly plausible is that more and more of what companies do has become explicit and executable. Not always in the sense of “a computer runs it,” but in the sense that the work is now routed through tools that create a record, define states, and force decisions into something like a formal language: tickets, pipelines, checklists, versioned docs, workflows, dashboards. Even conversations are increasingly logged, searchable, and linkable. The company begins to acquire something like a runtime.
And once you have a runtime, you can debug.
Processes are programs
The basic idea is almost embarrassingly simple: a process is a program.
It has inputs and outputs. It has preconditions. It has failure modes. It has side effects. If it’s important, it should be readable. If it’s used often, it should be testable. If it’s mission-critical, it should be observable.
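To make the claim concrete, here is a minimal sketch of a process written as a program. Everything in it is invented for illustration: the `Expense` shape, the `approve_expense` function, and the policy threshold are assumptions, not any real company's workflow.

```python
from dataclasses import dataclass

@dataclass
class Expense:
    amount: float
    has_receipt: bool
    category: str

APPROVAL_LIMIT = 500.00  # assumed policy threshold, purely illustrative

def approve_expense(expense: Expense) -> str:
    """Inputs, preconditions, and failure modes, all on the page."""
    if not expense.has_receipt:                     # precondition
        return "rejected: missing receipt"          # explicit failure mode
    if expense.amount > APPROVAL_LIMIT:             # escalation path
        return "escalated: needs manager sign-off"
    return "approved"                               # the happy path
```

The point is not that approvals should literally be Python. It's that once the process is written this way, it is readable (a new person can trace it), testable (you can feed it cases), and observable (every branch is a thing you can count).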
When companies say they “run on culture,” what they often mean is that they run on implicit processes no one has written down. That can feel romantic—like artisanal work—but it doesn’t scale well, and it isn’t kind to the people who weren’t there at the beginning.
Software has the same problem. The “culture” of a codebase is what exists in the heads of the people who wrote it. If you want more people to contribute, you have to convert tribal knowledge into explicit interfaces and conventions. Otherwise the code becomes a private language, and the team becomes a priesthood.
There’s a quote often attributed to Donald Knuth: “Programs are meant to be read by humans and only incidentally for computers to execute.”
Whether you care about attribution or not, the idea is right. The easiest way to tell if a piece of software is good is to look at how it feels to read. The easiest way to tell if a company is healthy is similar: watch how it feels to operate. Are the paths through it legible? Can a new person trace cause and effect? Or does it work the way a haunted house works—doors that open only if you know which candle to light?
Once you start seeing processes as programs, a lot of things snap into focus:
Onboarding is a compiler problem. You're trying to turn a human into a running instance of your system without walking them through every instruction by hand.
Meetings are sync primitives. They exist because the system has shared state that isn’t updated through a better channel.
Managers are sometimes routers (moving information), sometimes garbage collectors (removing blockers), sometimes performance engineers (finding bottlenecks).
Culture is the default behavior of the system when no one is watching—your implicit error-handling.
And the biggest shift is this: instead of treating “people problems” as primary, you treat the system as primary.
That’s not dehumanizing. It’s the opposite. The village model tends to treat people as the variables you can tweak endlessly: motivate them more, train them more, push them more. The system model says: stop trying to upgrade humans like they’re firmware. If normal humans keep failing in the same places, your design is demanding something unreasonable.
Instrumentation without bureaucracy
Software engineers learned long ago that if you don’t measure anything, you end up arguing from vibes. But they also learned that if you measure the wrong things, you build a machine that lies to you.
Companies are just now learning both lessons at once.
The temptation is to treat metrics as moral verdicts. If you can count it, it becomes a target. If it becomes a target, people start playing games. That’s not because they’re evil; it’s because they’re inside the system you built.
There’s a line from systems thinking that I like because it’s so unsentimental: “The purpose of a system is what it does.” Stafford Beer coined it as a way to cut through intention and look at behavior.
If your performance-review system produces cautious employees, then its purpose—whatever you claim—is to produce cautious employees. If your sales incentives produce churn, then your incentive system is designed to produce churn. If your hiring process produces a monoculture, then that’s what it’s for.
You don’t fix this with speeches. You fix it the way you fix software: by changing the code.
That requires instrumentation, but of a particular kind: measurement that helps you decide what to do next. In practice, the best operational metrics are often boring. They’re latency and error rate. They’re cycle time and throughput. They’re defect rates and rework. They’re the organizational equivalents of “p95 response time,” not “how excited is everyone.”
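The organizational "p95" can be computed the same boring way as the software one. A sketch, with invented sample data (ticket cycle times in days) and a simple nearest-rank percentile:

```python
def percentile(values, pct):
    """Nearest-rank percentile: small, boring, and explicit."""
    ordered = sorted(values)
    k = max(0, round(pct / 100 * len(ordered)) - 1)
    return ordered[k]

# Hypothetical cycle times, in days, for ten recently closed tickets.
cycle_times_days = [1, 2, 2, 3, 3, 4, 5, 8, 13, 21]

print("p50:", percentile(cycle_times_days, 50))  # → 3
print("p95:", percentile(cycle_times_days, 95))  # → 21
```

Note what the tail tells you that the median hides: most work moves in days, but something in the system occasionally takes three weeks. That gap is where you debug.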
And sometimes the most powerful “metric” is simply forcing the system to be explicit about state. A ticket is not a metric, but it’s a state machine. It turns “somebody should” into “this is owned.” It makes work addressable.
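The ticket-as-state-machine idea can be sketched directly. The states and transitions below are assumptions for illustration, not a prescription:

```python
# Legal transitions for a ticket. Anything not listed is illegal.
VALID_TRANSITIONS = {
    "open":        {"in_progress", "closed"},
    "in_progress": {"blocked", "in_review", "closed"},
    "blocked":     {"in_progress"},
    "in_review":   {"in_progress", "closed"},
    "closed":      set(),
}

class Ticket:
    def __init__(self, title: str, owner: str):
        self.title = title
        self.owner = owner      # "somebody should" becomes "this is owned"
        self.state = "open"

    def move_to(self, new_state: str) -> None:
        if new_state not in VALID_TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition: {self.state} -> {new_state}")
        self.state = new_state

t = Ticket("Fix onboarding doc", owner="sam")
t.move_to("in_progress")
t.move_to("in_review")
t.move_to("closed")
```

The value isn't the code; it's the explicitness. Work has a current state, a named owner, and a finite set of next moves, which is exactly what makes it addressable.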
There’s a popular improvement-science quote that captures this entire worldview: “Every system is perfectly designed to get the results it gets.”
That sentence is secretly liberating. It says: if you don’t like your outcomes, you don’t need to find new people with better souls. You need to redesign the system.
Interfaces, ownership, and Conway’s Law
What makes software scale isn’t brilliance; it’s modularity.
A small group can build almost anything if they can hold the whole thing in their heads. Scaling begins when you can’t. Then the question becomes: how do you divide work without creating chaos?
Software answers: modules and interfaces.
Organizations stumble into the same answer. They call it ownership, responsibility, autonomy, clear roles. But what they’re groping for is the same thing: boundaries where decisions can be made locally, and contracts that prevent constant coordination.
The reason this matters is captured by Conway’s Law, originally stated by Melvin Conway in 1968: organizations that design systems tend to produce designs that mirror their communication structures.
People in software summarize it as “you ship your org chart,” because that’s what it feels like when you inherit a system full of awkward seams that correspond exactly to internal politics.
What’s interesting is that Conway’s Law can be read in two opposite ways:
As a curse: “No matter what we try to build, the org’s dysfunction will leak into it.”
As a design tool: “If we want better systems, we must design better communication structures.”
If you take “company as code” seriously, you stop treating your org chart as a political artifact and start treating it as architecture. You ask: what modules do we need? What are the interfaces? Where should decisions live? Where do we want tight coupling, and where do we want loose coupling?
This is also where the metaphor stops being metaphor and becomes literal. A company that can’t define interfaces is a company that can’t scale. It will become meeting-shaped, because meetings are what you use when you don’t have interfaces.
Refactoring: the missing management skill
Most management advice assumes processes are permanent. It talks about “implementing” something, as if the hard part is installing it, and then it runs forever.
But the most important fact about organizations is that they drift. Every process accumulates barnacles. People route around problems. Exceptions become normal. The thing you designed is not the thing you’re running.
In software, we have a name for the skill of dealing with drift: refactoring.
Refactoring is not rewriting. It’s changing structure without changing behavior—at least at first. It’s paying down complexity so you can move faster later. It’s also a kind of honesty: admitting that yesterday’s design was built for yesterday’s constraints.
Companies are bad at refactoring because refactoring feels like failure. If you change a process, someone has to admit it wasn’t perfect. And in companies, admitting imperfection often has political cost.
Software engineering has the opposite norm. If you never refactor, you’re negligent.
There’s a line from C. A. R. Hoare that captures the deep reason refactoring is hard: there are two ways to design something—make it so simple there are obviously no deficiencies, or make it so complicated there are no obvious deficiencies. The first way is much harder.
That applies to organizations too. You can build a company full of complicated processes that look sophisticated, and the deficiencies will be hard to see because everything is hidden behind complexity. Or you can build something simple enough that when it breaks, you can see where.
The first kind of company feels “enterprise-ready.” The second kind is the one that can keep learning.
A software-minded company treats processes as provisional. It’s not loyal to them. It treats them as tools. If a process doesn’t work, you don’t defend it; you replace it.
Even better: you delete it. Deletion is underrated as a form of progress. Most organizations only grow. They almost never shrink in complexity. They accumulate committees the way old codebases accumulate dependencies. Then everyone wonders why everything is slow.
The company-as-code mindset says: if we can’t delete, we don’t really own the system.
“But companies are made of people”
At this point someone usually says: sure, cute metaphor, but companies aren’t code. People aren’t functions. You can’t unit test morale.
This is true—and also strangely irrelevant.
A company is made of people the way a city is made of people. If you redesign an intersection, you’re not pretending citizens are cars. You’re acknowledging that environments shape behavior. You’re trying to reduce accidents without asking everyone to become a saint.
The deepest advantage of the engineering frame isn’t efficiency. It’s compassion.
Blame is the default in badly designed systems. When outcomes are inconsistent and work is ambiguous, the simplest story is “someone screwed up.” The engineering frame gives you a better default story: “what did the system make likely?”
Deming’s quote is, at its core, an anti-blame philosophy.
And Beer’s POSIWID is an anti-self-deception philosophy.
Together they point to a kind of managerial humility that feels rare: stop narrating your intentions. Look at what your company actually does. If you want it to do something else, change the structure that produces the behavior.
This doesn’t eliminate the human part. It relocates it.
In a “village company,” leadership is often about persuasion and status. In a “code company,” leadership looks more like design: choosing constraints, clarifying interfaces, deciding what to optimize, protecting time for deep work, and removing sources of unnecessary conflict.
The human work becomes more subtle: not “make people work harder,” but “make it easier for people to do good work without constant friction.”
Toward companies you can “compile”
The most interesting implication of all this is not that companies can be optimized. It’s that they can be generated.
If you can express a process clearly enough to instrument it, you can often express it clearly enough to automate parts of it. If you can define the contract for a role, you can often define what software can do to support it. If you can specify the state machine of a workflow, you can often build a tool that enforces it gently, the way a type system prevents certain bugs.
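Here is one way to sketch "enforcing a workflow the way a type system prevents bugs": make each state its own class, so only the legal transitions exist as methods and an illegal step is simply not expressible. The draft/review/publish workflow is an invented example.

```python
class Draft:
    def __init__(self, text: str):
        self.text = text

    def submit(self) -> "InReview":
        return InReview(self.text)

class InReview:
    def __init__(self, text: str):
        self.text = text

    def approve(self) -> "Published":
        return Published(self.text)

    def request_changes(self) -> Draft:
        return Draft(self.text)

class Published:
    def __init__(self, text: str):
        self.text = text
    # No transition methods: a published doc can't silently revert.

doc = Draft("Q3 plan").submit().approve()
# doc.submit() would fail: the illegal step doesn't exist to be called.
```

This is gentle enforcement in the type-system sense: the tool doesn't scold anyone after the fact; it makes the wrong move unavailable in the first place.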
And once you start doing that, building a company begins to resemble building a product. You pick a set of primitives—communication channels, decision rights, review loops, hiring filters, escalation paths—and you assemble them into something coherent.
This is what founders do anyway. The difference is that most founders do it unconsciously. They improvise. They adopt rituals because they saw them somewhere. They keep the ones that “feel right.” That works for a while. Then they wake up inside a labyrinth of habits.
The company-as-code mindset is simply doing the founding work on purpose.
It suggests a future where the best-run companies will feel unusually light. Not because they have fewer humans, but because they have less sludge. Fewer meetings that exist only to reconcile ambiguity. Fewer heroics required to move work from one state to another. Less dependence on particular people as living databases.
They’ll look less like bureaucracies and more like well-designed systems: modular, observable, refactorable.
Which raises a question that’s almost embarrassing to ask out loud, because the answer seems so obvious once you’ve seen it:
If you can refactor code, why wouldn’t you refactor the company?