AI Won't Reduce Your Workload — It Will Just Make You More Exhausted

Last month I used Claude Code to rebuild a microservice that was on the roadmap for the entire sprint. Three days, start to finish. Tests passing, deployed to staging, docs written. I felt like a wizard.

My manager’s response? “Great, since you’ve got bandwidth now, can you also take on the payment refactoring and the API migration?”

I did not, in fact, have bandwidth. I had just compressed a week of cognitive effort into 72 hours. But from the outside, it looked like I was coasting. So I picked up the extra work, fired up Claude again, and ground through another marathon. By Friday I was staring at my screen, unable to form coherent thoughts about architecture, burning through prompts that kept drifting off-target because I was too fried to write clear instructions.

This little episode would have been just a personal anecdote — one more programmer humble-bragging about AI speed — except that a team of UC Berkeley researchers just published findings in Harvard Business Review that describe exactly this pattern playing out across an entire company. And it turns out my experience isn’t an outlier. It’s the norm.

The Research That Should Make Every Manager Uncomfortable

Aruna Ranganathan and Xingqi Maggie Ye, from UC Berkeley’s Haas School of Business, spent eight months embedded in a 200-person American tech company. They conducted over 40 in-depth interviews across engineering, product, design, research, and operations. The company didn’t mandate AI use — it simply offered subscriptions to all employees and let adoption happen organically.

What they found challenges the entire productivity narrative that has fueled the AI investment boom:

AI tools didn’t reduce work. They consistently intensified it.

Workers using AI moved faster, yes. They took on a broader scope of tasks, yes. But they also extended work into more hours of the day — often without being asked. They filled lunch breaks with prompts. They ran queries during meetings. They squeezed in “quick” AI tasks after logging off for the evening.

As one engineer told the researchers: “You had thought that maybe, ‘Oh, because you could be more productive with AI, then you save some time, you can work less.’ But then really, you don’t work less. You just work the same amount or even more.”

The researchers identified three patterns of work intensification:

Task expansion. AI let people fill knowledge gaps and take on work outside their role. Product managers started writing code. Researchers picked up engineering tasks. This felt empowering at first — until the downstream effects kicked in. Software engineers found themselves reviewing AI-generated code from colleagues who were essentially “vibe coding,” adding review burden to already heavy workloads.

Blurred boundaries. Because AI is always available and always responsive, the line between work time and personal time dissolved. The “I’ll just ask it one quick thing” impulse during evenings and weekends became chronic. Natural recovery time vanished.

Compulsive multitasking. Workers reported running AI agents in the background while coding manually, creating what one described as “always juggling.” Prior research consistently shows multitasking decreases actual productivity, but the illusion of parallel progress was hard to resist.

The Perception Gap

Here’s what makes this especially insidious: people thought they were being more productive even when the data showed otherwise.

A separate study by METR (Model Evaluation & Threat Research), published in July 2025, ran a randomized controlled trial with experienced open-source developers. Before the study, the developers predicted AI tools would cut their task completion time by 20%. The actual result? AI increased task completion time by 19%.

Read that again. Experienced developers were slower with AI, but they believed they were faster.

The cognitive dissonance is remarkable. We’re running harder on a treadmill and telling ourselves we’re covering more ground. A National Bureau of Economic Research study tracking AI adoption across thousands of workplaces found productivity gains amounted to a modest 3% in time savings, with no significant impact on earnings or hours worked. An MIT study found the vast majority of companies that adopted AI saw no meaningful revenue growth. Even OpenAI’s own 2025 enterprise report found employees saved an average of only 40 to 60 minutes per week.

Meanwhile, 83% of workers now report experiencing burnout, according to coverage of the research. Expectations keep climbing while actual output has barely budged.

A Hacker News commenter captured it viscerally: “Since my team has jumped into an AI everything working style, expectations have tripled, stress has tripled and actual productivity has only gone up by maybe 10%.”

What It Actually Feels Like From the Inside

I’ve been writing code professionally for over two decades. The shift AI has introduced isn’t like learning a new framework or switching languages. Those were lateral moves — new syntax, same underlying craft. AI is vertical. It changes what the work feels like.

Before AI coding tools, I’d spend an hour wrestling with a tricky piece of logic. Frustrating? Sometimes. But that friction was where learning happened, where understanding deepened, where the satisfaction lived. Now I describe the problem in a prompt, get back a plausible solution in seconds, and spend my time reviewing, adjusting, and re-prompting. The output is often good. But the experience is hollow in a way that’s hard to articulate.

I’m not alone in this feeling. James Randall, a programmer who’s been coding since 1983, wrote a piece that resonated widely in the developer community. Now 50, he describes watching the core pleasure of programming get “hollowed out”:

“I’m not typing the code anymore. I’m reviewing it, directing it, correcting it. And I’m good at that — 42 years of accumulated judgment about what works and what doesn’t, what’s elegant versus what’s expedient, how systems compose and where they fracture. That’s valuable. I know it’s valuable. But it’s a different kind of work, and it doesn’t feel the same.”

He calls this a “fallow period” — not burnout exactly, but a fundamental disorientation. The ground shifted under a building he thought was permanent. The feedback loop changed. The intimacy vanished. The puzzle, the chase, the moment where something finally clicks — compressed into a prompt and a response.

What strikes me about Randall’s essay is his honesty about the abstraction tower. He points out that developers were already disconnected from the machine long before AI arrived — writing TypeScript that compiles to JavaScript that runs in V8, making system calls through layers they couldn’t diagram, pulling in hundreds of npm packages they’ve never read. “AI is just the layer that made the pretence impossible to maintain.”

But there’s a crucial difference between knowing you don’t understand every layer and feeling like the craft itself has changed. The former is manageable. The latter is an identity crisis.

When the AI Product Itself Stumbles

The irony deepens when you look at the companies building these productivity tools and find they’re struggling with their own AI products.

Microsoft’s Copilot — arguably the highest-profile enterprise AI product in the world — is running into significant problems, according to reporting by the Wall Street Journal. The issues read like a case study in how AI productivity promises collide with messy reality.

The numbers are rough. From July 2025 through late January 2026, the percentage of paying Copilot subscribers who use it as their primary AI tool dropped from 18.8% to 11.5%, according to a survey of over 150,000 U.S. respondents by Recon Analytics. In the same period, Google’s Gemini gained ground, climbing from 12.8% to 15.7%. Users who switched cited better quality elsewhere, poor user experience, and restrictive usage limits.

Some enterprise customers are using only about 10% of the Copilot subscription seats they’re paying for, according to Citigroup analysts. Internal Microsoft surveys found users confused by the multiple Copilot versions scattered across different products. Even CEO Satya Nadella reportedly sent a frustrated email after the enterprise Copilot on Edge browser couldn’t fulfill a prompt about a public webpage he was viewing.

Meanwhile, Anthropic’s Claude Cowork drew praise for seamlessly working across Microsoft 365 applications — the very thing Copilot was supposed to excel at. The release of Cowork features was one factor behind a slide in software stocks that hit Microsoft hard.

There’s something almost poetic about it: the flagship AI productivity tool, built to help workers do more with less, is itself struggling with the classic problems of software complexity — fragmented user experience, organizational silos between teams, interoperability failures, and the gap between marketing promises and daily reality. Microsoft is living the AI productivity paradox from both sides simultaneously.

The Cycle Nobody Planned For

The Berkeley researchers describe a vicious cycle that emerges once AI is introduced to a workplace:

  1. AI accelerates certain tasks.
  2. Faster output raises expectations for speed.
  3. Higher expectations push workers to rely more heavily on AI.
  4. Heavier AI use expands the scope of what they attempt.
  5. Expanded scope increases the quantity and density of work.
  6. Go to step 1.

At no point in this cycle does anyone get to rest. The efficiency gains don’t get converted into downtime or deeper thinking — they get converted into more throughput. And because it’s largely self-imposed (workers voluntarily taking on more because AI makes it feel possible), there’s no obvious villain. No manager cracking a whip. Just the quiet ratchet of expectations, both internal and external, turning one click at a time.

Rebecca Silverstein, a licensed clinical social worker, put it bluntly to Fortune: “Just focusing on that productivity mindset, in the long term, is super harmful for someone.” People need breaks — not just for rest, but for the interpersonal relationships that make work sustainable. A 2024 Pew survey found that relationships with coworkers ranked as the most satisfying aspect of jobs, with 64% of respondents reporting high satisfaction with this element. AI-driven work patterns are eroding exactly that.

Even Sam Altman, OpenAI’s CEO, has described how AI intensifies his own work: “I don’t think I can come up with ideas fast enough anymore.” If the person building the tool can’t keep up with the pace it enables, what hope does a mid-level engineer at a company have?

Building a Sustainable Rhythm

I don’t have a five-step framework to solve this. Anyone offering one is probably selling something. But after a year of heavy AI tool use and watching these research findings land, I’ve started making deliberate choices that go against the productivity-maximization instinct:

I cap my AI-assisted work sessions. Three focused hours, then I step away. Not to another screen, but to something physical. Coffee. A walk. I noticed that past that threshold, my prompts get sloppy, my code review gets superficial, and I make decisions I regret the next day.

I protect craft time. Once a week, I pick a problem and solve it without AI. Pen on paper, then to code. It’s slower. It’s also where I learn things I didn’t know I didn’t know. The AI-assisted version would have shipped faster and taught me nothing.

I push back on scope creep. When the work compresses but the expectations expand, I say it out loud: “This was a week of work compressed into three days. The three days weren’t free — they were three days of very intense cognitive load.” Naming it matters. It makes the invisible visible.

The Berkeley researchers recommend similar organizational responses: intentional pauses built into workflows, structured batching to avoid notification overload, and protected time for human connection. These aren’t soft feel-good suggestions — they’re guardrails against a pattern that, left unchecked, leads to degraded output, turnover, and the kind of burnt-out workforce that no amount of AI tooling can compensate for.

The promise of AI was supposed to be less work, not more. What we got instead is the same amount of work at higher intensity, wider scope, and with fewer natural breaks. That’s not a productivity revolution. That’s a speedup on a factory line — just dressed in a hoodie and powered by GPUs.

The sooner we stop pretending the tools will save us and start setting boundaries around how we use them, the sooner we might actually get some of that promised time back.

