Why can't our dev team vibe code as fast as me?
Product managers and CEOs, who don't even know how to code, can now code faster than their dev teams. What's going on?
Since CEOs and PMs got a taste of vibe coding, I keep getting the same frustrated question: “Why does this still take weeks?”
I get it. You’re in Cursor, you’ve got Claude Code or Codex as your copilot, and you just built a pretty solid appointment scheduler in an afternoon. You’re not even a developer! So when your engineering team comes back with a three-week estimate for basically the same thing, you’re thinking: Are they just not using AI right?
No. They're using AI just fine. But something subtle changed that both sides missed, and it's making your dev team look slower than they really are.
From the developers’ perspective, their estimates still feel accurate. They’re hitting their timelines. The old estimation techniques still work. Three weeks ago they said “three weeks,” and here we are, shipping on time. So what’s the problem?
The defense that doesn’t work
The standard developer responses to “why so slow?” all sound like excuses now:
“Did you read all the vibed code?” (Sure, a few more hours.)
“What about security, logging, monitoring?” (Okay, add a day.)
“Where’s CI/CD?” (Another day, fine.)
Even if you price those in generously, we’re still not looking at weeks of full-time work. So the question stands: if AI makes coding faster, why aren’t dev teams getting proportionally faster?
Because we forgot to notice what changed about scope.
Pre-vibe vs post-vibe: what actually changed
Before vibe coding came along, there was constant tension between dev teams and product teams over scope creep. Product wanted polish; engineering forced descoping to hit dates. Every week we'd argue about whether some nice-to-have was worth pushing the deadline. We'd ask things like:
“Are you sure that’ll help us sell more seats?”
“Will users actually use that?”
“Can we just ship the core feature first and see if anyone asks for that?”
These days? We just... say yes.
A product manager floats the idea that the appointment scheduler should have three different user-driven configuration options. Instead of pushing back, we do the math in our heads: That’s maybe a few more hours of vibe coding. Easier to say yes than argue. So we say sure.
The developers aren’t lying about their estimates or padding them. They genuinely don’t realize they’re building more than they used to. The old estimation techniques (story points, t-shirt sizes, past velocity) still produce accurate timelines because they’re unconsciously pricing in all the extra scope they’re no longer fighting. Neither side noticed that the battle lines moved.
The friction to add features dropped significantly, so the scope quietly expanded to fill the available time.
Let’s say we’re building that appointment scheduler, and we say it’ll take three weeks. At the end of three weeks:
Pre-vibe world: The system works, but only just. It has exactly the features that match the most pressing business needs. Recurring appointments? De-scoped. Appointment reminders? Coming in v2. Mobile version? Soon. The PM feels satisfied: “Cool. Got the scheduler done.”
Post-vibe world: We have notifications. We have reminders. We have mobile. We have recurring appointments. We even added a little intelligence that notices when you’re making a similar appointment and helpfully pre-fills fields. The PM feels... basically the same: “Cool. Got the scheduler done.”
We forgot to even notice the 15 nice-to-haves that wouldn’t have been in this sprint three years ago!
The product manager asks, “Why aren’t we coding faster?” and we feel defensive because we are coding faster. We’re just also coding more.
Evidence that supports the intuition
I know this feels like anecdata, but the numbers back it up:
Most features don’t get used. Pendo analyzed real product usage and found that around 80% of features are rarely or never used, representing billions of dollars of build time diverted from actual value. Earlier Standish data put it at roughly 66%.
Scope creep is common and costly. PMI reported that 52% of projects experienced scope creep, and that the figure is rising.
AI does speed up coding… A 2025 study showed Google engineers completed an enterprise-grade task in 96 minutes with AI assistance, compared to 114 minutes without it, a 21% speed-up.
…but coding is the small slice. Atlassian’s 2025 DevEx report found that while devs save hours with AI, they lose similar time to organizational drag: finding info, unclear direction, cross-team friction. Coding is only about 16% of the week; the rest is everything around it.
And AI is not a universal accelerator. A 2025 study found experienced developers working in familiar codebases sometimes slowed down about 19% with AI assistants due to review and correction overhead. Useful, but not magic.
So, for most developers in most situations, yes, AI makes writing code faster. But the combination of (1) quietly enlarged scope and (2) all the non-coding work (discovery, decisions, reviews, integrations, QA, release, change management) eats those gains.
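A quick back-of-the-envelope Amdahl's-law calculation makes the point concrete. This sketch takes the article's cited numbers as loose inputs (the ~16% coding share from Atlassian and the 96-vs-114-minute task times from the Google study); it's an illustration of the shape of the math, not a precise model of any team's week.

```python
def overall_speedup(coding_share: float, coding_speedup: float) -> float:
    """Amdahl's law: overall speedup when only a `coding_share`
    fraction of the work accelerates by `coding_speedup`."""
    remaining = 1 - coding_share              # non-coding work, unchanged
    accelerated = coding_share / coding_speedup
    return 1 / (remaining + accelerated)

coding_share = 0.16        # ~16% of the week is coding (Atlassian figure)
coding_speedup = 114 / 96  # ~1.19x on the task itself (Google study minutes)

print(f"{overall_speedup(coding_share, coding_speedup):.3f}x")  # prints "1.026x"
```

Even with zero scope creep, a ~19% faster coding slice buys only about 3% for the week overall. Quietly expanded scope can absorb that without anyone noticing.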
And even though Atlassian’s report doesn’t mention my theory of increased scope, it’s a reasonable explanation for why so much time now goes into engineering overhead. Developers have always needed to gather requirements from product, ask other developers about interfaces and how things work, do code reviews, and so much more. If the actual coding part of the cycle shrinks, all of those other activities take up a proportionally larger share of it.
What to do instead of “just go faster”
Reduce sprint length. Sprint cadence should balance two risks: interrupting engineers too often and letting them veer too far off course. I used to like a one-week cadence. In an AI-accelerated world, it may be worth integrating everyone’s work and reprioritizing the backlog twice a week.
Track the yeses. Make scope visible. Keep a running list of “nice-to-haves” and ask weekly: which of these drives revenue, retention, or learning right now?
Produce according to demand. Align what you build with what customers will actually use or what will land more customers. Kill or delay the rest. It’s hard as a manager to accept developers waiting a couple of days with nothing new to code, but it’s far worse for them to build unused features that produce no value and later work against the organization by slowing velocity and adding maintenance overhead.
Protect the thin slice. Coding got faster; everything else didn’t. Invest in crisp requirements, slim approvals, paved-road CI/CD, and fast integration test suites so the non-code time doesn’t cancel AI’s gains. I like DORA metrics as a way to measure and encourage this.
Measure product, not output. Reward teams for adoption, retention, and sales impact, not story points or PR counts, which vibe coding can inflate without adding value.
Bottom line
I’m not saying don’t make beautiful, intuitive software where people notice the care and thought that went into the user experience. I’m saying not every idea is worth coding right now.
Vibe coding lowered the friction to say yes. Our discipline has to rise with it.
If we start noticing what we’re adding, and align scope to real demand, the org will finally feel the speedup AI already delivered.
Thanks for reading! Let me know if you have experienced this yourself or if you disagree in some way.
—Jon Christensen


