The Genetically Modified Employee
There’s a pitch — sometimes said explicitly, more often just implied in every AI transformation roadmap — that goes like this:
“We can take the reasoning and execution capabilities of our best people, encode them into an agent, and deploy that capability at scale — without the overhead of managing actual people.”
No onboarding curve. No retention risk. No performance that varies with mood, motivation, or personal circumstances. No six-month ramp-up followed by a resignation letter.
Just the capability. Extracted. Controlled. Scalable.
And to be fair — for some capabilities, this is exactly right. There are parts of every business process where the human was applying judgment, but it was structured judgment — judgment that follows patterns, draws on documented knowledge, and produces outputs that can be evaluated against clear criteria. An agent can carry this kind of reasoning effectively because the capability, while not purely mechanical, is separable from the person who held it.
But here’s what I think most companies are getting wrong. It’s not that anyone is explicitly claiming every human capability can be replicated by an agent — nobody in a serious boardroom is saying that. The problem is subtler: companies aren’t doing the hard work of figuring out where the line actually is. Which capabilities extract cleanly into an agent, and which ones are so deeply tied to being human that extracting them means losing them? That question requires a level of organizational self-awareness that most transformation roadmaps don’t budget for.
Without that line drawn carefully, what happens by default is that companies treat more and more capabilities as separable — not because they decided they all are, but because they never stopped to ask which ones aren’t. And that default is where the trouble starts. Because people aren’t a parts catalog. They’re an organism. And what companies are building, whether they frame it this way or not, is something closer to a genetically modified version of that organism — an agent that carries only the traits the business selected for, without the overhead of being an actual person.
That’s the metaphor I want to explore, because I think it reveals something most AI transformation conversations are missing entirely.
The Organism and the Modification
In genetic modification, the goal is elegant: identify a desirable trait in an organism, isolate the gene responsible, and transplant it into a new organism that expresses that trait without the unwanted characteristics of the original.
It works — sometimes. When the trait is controlled by a single gene with a clear mechanism, the modification is straightforward. You get what you designed for. Resistance to a specific herbicide. Production of a specific protein. These are traits that are separable — they exist independently enough that you can move them without breaking anything.
But geneticists learned early that most interesting traits don’t work this way. Genes rarely operate in isolation. A gene that produces a desired trait often depends on other genes to express correctly. Remove the supporting genes and the desired trait disappears or mutates into something unexpected. Worse, a single gene can influence multiple traits at once — a phenomenon called pleiotropy. Select for one thing and you accidentally affect three others. The genome is a system, not a parts catalog.
This is exactly what’s happening with AI agents in organizations. Not because companies are being careless — but because the entanglement is genuinely hard to see until you’ve already made the cut.
What Separates Cleanly
Let me be fair to the technology first, because the wins are real.
Some human capabilities in business are genuinely separable. A customer service rep who answers questions by looking up knowledge base articles, applying company policy, and generating a response? That capability — structured reasoning over documented knowledge — extracts cleanly. The human judgment involved follows patterns. The inputs are largely structured. The quality of the output can be evaluated against clear criteria.
Same for a junior analyst producing a market summary from public data sources. Same for a compliance checker reviewing documents against a regulatory checklist. Same for a scheduling coordinator optimizing calendar logistics across a team.
In each case, the valuable work was being done by a human, and it genuinely required some judgment — it wasn’t pure automation. But the judgment was modular. It didn’t depend on who the person was, what they’d lived through, or what they noticed in someone’s tone of voice during a meeting. It depended on knowledge, rules, and patterns that can be documented and transferred.
This is the single-gene modification. The trait separates. The transplant works. The agent produces what you designed for.
Companies are seeing massive gains here, and they should be. This is legitimate.
What’s Actually Entangled
But here’s where the genetics stop being clean.
The customer service rep who handles the difficult edge case perfectly — the one that’s not in any runbook. She doesn’t handle it because she was trained on that scenario. She handles it because she’s been through hardship in her own life and recognizes emotional distress in a customer’s tone. Her empathy isn’t a professional skill she developed through training. It’s a human trait that happens to be valuable in this context. You can’t extract it without the life experience that produced it.
The operations manager who catches a cascading failure before it shows up on any dashboard. He doesn’t catch it because of monitoring tools. He catches it because he noticed a hesitation in someone’s voice during standup and went digging. That’s social intelligence — the same trait that also makes him political, sometimes difficult, and occasionally territorial about his domain. The detection ability and the personality aren’t separate traits. They’re the same trait expressing differently depending on the situation.
And if you think this is only about business processes, look at what’s happening in software engineering. AI can generate code, write tests, and implement well-defined patterns. Those are separable capabilities, and AI handles them well. But the senior architect who pushes back on a microservices migration — not because it’s technically wrong, but because she’s been through two failed migrations and knows this team doesn’t have the operational maturity to pull it off? That judgment is entangled with a decade of lived experience, organizational awareness, and the kind of intuition that comes from watching decisions play out over years. The same traits that make her invaluable in a design review also make her opinionated and expensive. You can’t extract the architectural judgment and leave the rest behind.
In each case, the valuable business capability is produced by a human trait that the business would prefer not to deal with. The empathy comes with emotional variance. The social intelligence comes with politics. The architectural judgment comes with the stubbornness and the salary.
This is pleiotropy. The gene you want and the gene you don’t want are the same gene. You can’t engineer one in and the other out.
The Trade Companies Don’t Know They’re Making
This is where most AI transformation efforts are heading for trouble — not because anyone is being reckless, but because separable and entangled capabilities are genuinely hard to tell apart from the outside. And without a deliberate effort to distinguish them, the default is to treat everything as separable until something breaks.
From the outside, they look similar. The customer service rep who handles routine questions and the one who handles the devastating edge case both sit in the same department, carry the same title, and show up in the same headcount line. The operations manager who reads dashboards and the one who reads people both report into the same org chart. The structured judgment and the entangled judgment are bundled into the same role, the same salary, the same line item that a CFO wants to optimize.
So when the company deploys an agent, the initial results look great. Say the agent handles 85% of cases well — the separable ones. Speed goes up. Cost goes down. The business case validates itself.
But the remaining 15% — the cases that needed the entangled traits — get worse. Not in a way that shows up immediately on a dashboard. The edge-case customer who needed empathy gets a technically correct but emotionally deaf response and quietly churns. The cascading failure that the ops manager would have caught from a tone of voice goes undetected until it’s a full-blown incident.
The company doesn’t see this as a capability it amputated. It sees it as acceptable attrition. The math still works — the bulk of interactions automated at a fraction of the cost more than offsets the losses at the margins.
Until it doesn’t. Because those margins aren’t random. They’re load-bearing. Those edge cases, those catches, those moments of human perception — they’re often what kept the highest-value customers loyal and what prevented the most expensive failures. They’re disproportionately valuable precisely because they’re rare and hard.
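To make that concrete, here is a back-of-the-envelope sketch in Python. Every number in it is hypothetical: the volumes, per-interaction costs, lifetime values, and churn effects are invented purely to show the shape of the argument, not to estimate anything real.

```python
# Hypothetical illustration of the paragraph above. All figures are
# invented; the point is the shape of the math, not the numbers.

interactions = 100_000
human_cost, agent_cost = 8.00, 0.50      # dollars per interaction

savings = interactions * (human_cost - agent_cost)

# The agent handles 85% of cases well; the other 15% get the
# "technically correct but emotionally deaf" experience.
degraded = interactions * 0.15

# Naive accounting: price each degraded case like an ordinary ticket.
naive_loss = degraded * 20.00            # $20 of goodwill lost per case
print(f"savings ${savings:,.0f} vs naive loss ${naive_loss:,.0f}")
# savings $750,000 vs naive loss $300,000: the business case validates itself

# Load-bearing accounting: suppose 5% of the degraded cases involve top
# customers worth $5,000 in lifetime value, and a tone-deaf response
# raises their churn probability by 25 percentage points.
guarded = degraded * 0.05
hidden_loss = guarded * 0.25 * 5_000
print(f"savings ${savings:,.0f} vs hidden loss ${hidden_loss:,.0f}")
# savings $750,000 vs hidden loss $937,500: the margins were load-bearing
```

The specific figures are beside the point. What matters is that the hidden loss term scales with the value concentrated in the edge cases, and a dashboard that prices every interaction the same will never surface it.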
I wrote in a previous piece about illegible knowledge — the kind that was never documented because it’s too intuitive, too contextual, or too politically sensitive to write down. The entanglement problem goes one step further: even if you could somehow document that knowledge, some of it only exists because of human traits that have nothing to do with the business. The knowledge is a byproduct of being human. Extract the knowledge, leave the human behind, and the kind of knowledge that came from lived experience stops regenerating.
How Companies Respond (And Why It’s Worth Watching Carefully)
When companies hit the entanglement wall, they don’t stop. They do something more interesting — and more concerning. They route around it.
They accept degraded quality in the entangled areas and redesign the process so the entangled capability is no longer needed. The customer interaction gets restructured so empathy matters less — the flow is simplified, the edge cases are funneled into scripted paths, the moments that used to need a human who gets it are engineered out of the experience. The operations workflow gets rebuilt so that information flows through systems instead of hallway conversations, reducing the value of social intelligence.
I’ve seen this starting to play out in support organizations that move from open-ended customer conversations to guided decision trees with agent handoffs. The tree eliminates most of the situations where empathy would have mattered — not by providing empathy, but by narrowing the interaction until it’s not needed. The customer never gets the moment where someone truly understands their frustration. But the interaction resolves, the ticket closes, and the metrics look fine.
This works, up to a point. But it’s worth being honest about what’s happening. The company isn’t solving the entanglement problem. It’s eliminating the conditions that made entangled traits valuable. That’s a different thing, and it has a cost that doesn’t show up in the same quarter it’s incurred.
Every process you simplify to remove the need for human judgment is also a process you’ve made less adaptive. The scripted customer flow works great until a situation arrives that the script doesn’t cover — and now there’s no one around who would have handled it intuitively. The systematized operations workflow surfaces information efficiently until the signal is social, not systemic — and now there’s no one who would have caught it.
You’re trading resilience for efficiency. And that trade looks brilliant right up until the moment it doesn’t.
The Gap Nobody Is Preparing For
Here’s what I think is actually true, and I’ll say it directly.
The extraction will keep going. It should. The separable capabilities — structured reasoning, pattern matching over documented knowledge, rule application at scale — are genuinely better in agents than in humans for most business contexts. Fighting that is like fighting the ERP wave or the BI wave. The value is real, and the organizations that capture it will outperform those that don’t.
But most organizations will get the trade wrong. They’ll optimize for the traits they can measure — speed, consistency, throughput, cost per interaction — and discover too late that the unmeasurable traits were load-bearing. Not all of them. But enough to matter. The companies that stumble won’t stumble because the technology failed. They’ll stumble because they couldn’t tell the difference between a capability they extracted and one they amputated.
And the hardest part won’t be the technology or even the trade-offs inside any single company. It will be the transition for the people involved. History tells us — and I wrote about this at length in “The Verifiability Ladder” — that technological disruptions eventually create more jobs than they eliminate. But “eventually” has always included a painful gap measured in years, sometimes decades. The people living through it don’t experience “eventual net job creation.” They experience displacement with no clear path forward.
The knowledge workers whose entangled capabilities made them valuable — the empathy, the social intelligence, the perception built from a lifetime of being human — need to become something the machine can’t be. But we don’t have language for that role yet, let alone a training program.
If you’re building these systems — and I am, so I’m talking to myself here too — every design decision about what to extract and what to preserve is a small vote for the kind of transition we’ll have. The question isn’t whether to deploy agents. It’s whether you understand, specifically and honestly, which capabilities in your organization are separable and which are entangled — and what you’re willing to trade.
The organism isn’t a parts catalog. The companies that don’t invest in understanding which parts of it are modular and which aren’t will get exactly what genetic engineers get when they ignore pleiotropy: an organism that expresses the trait they selected for and fails in ways they never anticipated.
The ones that understand the entanglement — that build hybrid systems where agents handle the separable and humans handle the entangled, where the boundary between the two is drawn with precision rather than optimism — those are the ones that will be left standing when the extraction settles.
The modification is coming. The question is whether you’re engineering with the organism or against it.
This is the third piece in a series exploring AI’s impact on how we build and run organizations. Previously: “The Verifiability Ladder” examined why AI targeted developers first, and “AI Can’t Replace What It Can’t See” explored the illegible knowledge that senior professionals carry. Next: what changes about architecture, testing, and operations when you build agentic systems.