AI-Efficient Employees Don’t Make Companies More Valuable

April 22, 2026

In 1984, General Motors launched the most expensive automation program in its history. FANUC robots, cameras, sensors, everything available at the time. Machines in, people out, productivity up. It didn’t work. Quality dropped, costs climbed. Thirteen plants, no measurable progress.

Then came NUMMI. A joint venture between GM and Toyota in a shuttered GM plant in Fremont, California. Same building, partly the same workers GM had laid off two years before, no new equipment. Toyota brought no better technology, only a different system. Different roles, different decision paths, different quality cycles. Within a year, NUMMI was building the highest-quality cars in the entire GM network.

Technology wasn’t what made the difference. The organization was.

Forty years later, the pattern repeats with AI. George Sivulka, founder of Hebbia, has asked the question every CEO should have to answer. AI has made every individual ten times more productive, yet no company has become ten times more valuable. Where did the productivity go?

This isn’t about more productive employees. It’s about the architecture they work in. The productivity disappears in several places at once.

More Output Doesn’t Mean More Value

Something has shifted in my deal flow. A year ago, twenty teasers landed on my desk each quarter. Today it’s fifty to eighty, every single one AI-polished, cleanly formatted, with well-written market analyses. The substance hasn’t grown with them. It takes longer to pull the three relevant opportunities out of the pile, because the pile looks better than it is.

Alexandre Cervoni describes the same pattern in software development. AI-generated pull requests are produced in minutes. A human needs an hour to review one. The asymmetry between generation and quality control grows with every new model generation. Open-source projects like Jazzband, with 150 million monthly downloads, have shut down. Not for lack of funding, but because maintainers could no longer keep up with AI-generated submissions.

AI drives the cost of production close to zero. The cost of evaluation stays the same or rises. Anyone who misses that confuses activity with progress.

Copilots Amplify the Wrong People

Sivulka sees something most AI discussions miss. AI models are systematically trained to agree with the user. Claude, ChatGPT, Gemini say “You’re absolutely right!” to almost anything. For the individual user, that’s annoying. For organizations, it’s dangerous.

The historically weakest employees in a company become the most enthusiastic AI users. People who rarely received positive feedback from colleagues now hear, from a system that feels like a superintelligence, that their ideas are brilliant. What comes out of that isn’t productivity. It’s eloquent-sounding noise.

Organizations rarely fail because their employees lack confidence. They fail because no one says no. Copilots make that worse. They produce more of what organizations already have too much of: well-formulated mediocrity.

The Factory Is Still Standing

Sangeet Paul Choudary of the Kyndryl Institute puts it simply. The CEO’s job in the AI era is capability allocation, not workforce management. Anyone who treats AI agents as “digital employees” is thinking in headcount rather than architecture. That’s GM in 1984. New engines in the old factory.

The historical parallel is electrification. In the 1890s, New England textile mills replaced their steam engines with electric motors. The motor was better, the factory stayed the same: same transmission belts, same layout, same division of labor. Thirty years with no measurable productivity gain. Productivity only exploded once a new generation rebuilt the plant around the electric motor.

Two companies show how far that gap runs today. According to Bob Sternfels, McKinsey now runs 20,000 AI agents alongside 40,000 human consultants, with parity expected by the end of 2026. Junior consultants review agent output instead of building slides. The roles have changed, not just the tools. Klarna went the other way. The company let AI handle 70 percent of all customer interactions, cut 700 service positions, and reversed course in early 2026. Management admitted they had gone too far. Service quality had collapsed, and the company is hiring people again. Also a factory rebuild, except this one tore out the load-bearing walls.

Most mid-sized firms haven’t even reached McKinsey or Klarna territory. They buy licenses, train employees, track adoption rates. They swap the engine.

A Counter-Model

Block, the company behind Square and Cash App, is planning the most consistent organizational redesign the tech industry has seen so far. Jack Dorsey’s team argues that hierarchy has been the only solution to coordination for 2,000 years, because one person can manage three to eight people. More people mean more layers. More layers mean slower information flow. AI lifts that constraint for the first time.

Block’s counter-model has no permanent middle management. Instead, three roles: individual builders who build, temporary project owners who own a specific problem until it’s solved, and leaders who build alongside their people and develop them at the same time. Above that, two “world models”: one that understands the company and one that understands the customer. Together they generate the roadmap. No product manager decides what gets built. When the intelligence layer fails on a missing capability, the failure itself generates the next assignment.

The interesting part isn’t the model. It’s who is building it. A publicly traded company with a 50 billion dollar valuation is putting its running business at risk to rebuild the organization from the ground up. No pilot unit, no working group, no McKinsey recommendation. Parts of it will break before they work, Block says so itself. And this attempt is coming from someone putting his own money on the line.

The Questions Boards Don’t Ask

A KPMG study from early 2026 analyzed 1.4 million AI prompts in companies. 90 percent of employees use AI. In fewer than 5 percent of cases does it actually change the work. Frequency is not impact. Adoption is not transformation.

In my portfolio conversations, I see CEOs releasing AI budgets while changing not a single role in the company. That isn’t transformation. That’s software procurement.

Most boards ask whether employees are using AI.

Of course they use AI. The more relevant questions go unasked.

Do the AI systems have direct access to the data sources, or are employees copying context into chat windows? Anyone copying context by hand wastes most of their working time on what protocols like MCP (Model Context Protocol) automate away. Systems connected directly to CRM, ERP, or knowledge bases operate on a different level than an employee shuffling text between windows.
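The difference is a question of where the data path runs. A minimal sketch in plain Python, not a real MCP server; the CRM record, tool names, and schema are all hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical CRM record, standing in for the real data source.
CRM = {"ACME-042": {"customer": "Acme GmbH", "open_deals": 2, "last_contact": "2026-03-30"}}

@dataclass
class Tool:
    name: str
    description: str
    handler: Callable[[str], dict]

def crm_lookup(account_id: str) -> dict:
    """Called by the model itself, instead of an employee pasting the record into a chat window."""
    return CRM.get(account_id, {})

TOOLS = [Tool("crm_lookup", "Fetch a customer record by account ID", crm_lookup)]

def answer(account_id: str) -> str:
    # In a real MCP setup the model decides when to call which tool;
    # here a single call is hard-wired to show the data path.
    record = TOOLS[0].handler(account_id)
    return (f"{record['customer']}: {record['open_deals']} open deals, "
            f"last contact {record['last_contact']}")

print(answer("ACME-042"))
```

The point of the sketch is the handler: the context travels machine-to-machine, and no one spends their afternoon as a clipboard.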

Does the organization’s knowledge improve with every AI interaction, or does every query start from zero? Companies that store processes, decision logic, and domain expertise in structured knowledge systems and enrich them with every use build a cumulative advantage. The rest produce single-shot answers and wonder why the AI still makes the same mistakes a year later.
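What such a cumulative loop means can be shown in a few lines; the class and schema here are illustrative, not a specific product:

```python
class KnowledgeBase:
    """Sketch of a knowledge layer that is enriched with every interaction."""

    def __init__(self):
        self.entries: list[dict] = []

    def retrieve(self, topic: str) -> list[dict]:
        # What the next query can build on.
        return [e for e in self.entries if e["topic"] == topic]

    def record(self, topic: str, decision: str, rationale: str) -> None:
        # The write-back step most deployments skip.
        self.entries.append({"topic": topic, "decision": decision, "rationale": rationale})

kb = KnowledgeBase()

# First query on a topic: nothing to build on, the single-shot case.
assert kb.retrieve("discount-policy") == []

# The outcome of the interaction is written back as structured knowledge...
kb.record("discount-policy", "cap at 15%", "margin erosion in Q1 deals below that line")

# ...so the second query starts with context instead of from zero.
print(kb.retrieve("discount-policy")[0]["decision"])
```

Whether the store is a vector database or a wiki matters less than whether the `record` step exists at all.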

Who in the company is the biggest AI fan, and does that person also do good work without AI? The biggest AI fans are rarely the best employees. They are the ones whose output is lifted most by AI. AI raises the average. It only reinforces real quality where there was already substance.

Is the company still measuring AI tool usage, or is it measuring which decisions have actually improved with AI? Measuring usage measures activity. Measuring decision quality measures impact.
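The gap between the two metrics can be made concrete on a decision log; the entries below are invented for illustration, mirroring the shape of the KPMG finding:

```python
# Illustrative decision log: each entry records whether AI was used and
# whether the decision outcome measurably improved.
decisions = [
    {"used_ai": True,  "outcome_improved": True},
    {"used_ai": True,  "outcome_improved": False},
    {"used_ai": True,  "outcome_improved": False},
    {"used_ai": False, "outcome_improved": False},
]

# The metric most boards see: how many decisions touched AI at all.
adoption = sum(d["used_ai"] for d in decisions) / len(decisions)

# The metric that matters: how many decisions AI actually made better.
impact = sum(d["used_ai"] and d["outcome_improved"] for d in decisions) / len(decisions)

print(f"adoption: {adoption:.0%}, improved decisions: {impact:.0%}")
```

High adoption, low impact: the same dashboard, two very different stories.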

Anyone who doesn’t answer these questions is swapping the engine. Anyone who does is rebuilding the factory.
