AI · 24 March 2026 · 8 min read

The honest cost of AI-assisted development: what we've learned after 18 months

We've been using Claude Code and the Claude API in production delivery for over a year. Here's what actually changed — in cost, quality, and client relationships — and what didn't.

When we started integrating AI tools into our development workflow in late 2024, the conversation around AI-assisted coding was dominated by two camps: people claiming it would replace developers within months, and people dismissing it as glorified autocomplete.

Eighteen months later, we've found that the truth — predictably — is more nuanced. AI has genuinely changed how we work, what we charge, and how we talk to clients about delivery. But it hasn't changed the fundamental nature of the work itself.

What actually got faster

The most significant time savings have come from tasks that are well-defined, repetitive, and low-risk. Boilerplate code generation, unit test scaffolding, documentation drafts, and data migration scripts all benefit enormously from AI assistance.

For a typical Umbraco project, AI-assisted development has reduced our time on these categories of work by roughly 40%. That's real and significant. For a project that previously took 12 weeks, those efficiency gains might save 2-3 weeks of developer time.
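The arithmetic behind that claim can be sketched in a few lines. The 50% routine-work share below is an assumption for illustration; only the 40% reduction and the 12-week baseline come from our figures.

```python
# Illustrative savings estimate. routine_share is an assumed value;
# ai_reduction and project_weeks reflect the figures quoted above.
project_weeks = 12
routine_share = 0.5    # assumed: half the project is boilerplate, tests, docs, migrations
ai_reduction = 0.40    # observed reduction on that routine work

routine_weeks = project_weeks * routine_share   # 6.0 weeks of routine work
weeks_saved = routine_weeks * ai_reduction      # 2.4 weeks saved
print(f"Estimated weeks saved: {weeks_saved:.1f}")
```

With those assumptions the model lands at roughly 2.4 weeks, consistent with the 2-3 week range we see in practice.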

The efficiency gains from AI aren't evenly distributed across a project. Architecture, client conversations, and testing strategy still take exactly as long as they always did — because they require judgement, not generation.

What didn't change

Architecture decisions, requirements gathering, accessibility auditing, security review, and performance optimisation — the high-judgement work that makes or breaks a project — haven't become meaningfully faster. These tasks require understanding context, weighing trade-offs, and making decisions that AI tools aren't equipped to make reliably.

We've also found that code review takes slightly longer when reviewing AI-generated code, because the reviewer needs to verify not just correctness but intent. Hand-written code carries implicit design decisions; AI-generated code can be technically correct but architecturally inconsistent.

What we charge (and why we tell you)

This is where our approach diverges from most consultancies. We break down our estimates into AI-assisted and hand-crafted categories, and we charge differently for each.

For AI-assisted work — boilerplate, scaffolding, initial implementations — we charge a lower rate that reflects the reduced human time involved. For architecture, review, and high-judgement work, we charge our standard rate.

The result is that clients see genuine cost savings (typically 15-25% on overall project cost) without wondering whether we're quietly pocketing efficiency gains. And they get to decide whether to invest the savings in additional features, more thorough testing, or simply a lower invoice.
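To make the two-rate model concrete, here is a hypothetical estimate. The rates and the hour split are invented for illustration; only the 15-25% overall saving band comes from our experience.

```python
# Hypothetical two-rate project estimate. All rates and hours are
# illustrative assumptions, not our actual pricing.
standard_rate = 100.0   # assumed hourly rate for high-judgement work
ai_rate = 60.0          # assumed discounted rate for AI-assisted work
hours = {"ai_assisted": 200, "high_judgement": 300}  # assumed split

# Cost if everything were billed at the standard rate
single_rate_cost = standard_rate * sum(hours.values())

# Cost with the discounted rate applied to AI-assisted work
blended_cost = (ai_rate * hours["ai_assisted"]
                + standard_rate * hours["high_judgement"])

saving = 1 - blended_cost / single_rate_cost
print(f"Overall saving: {saving:.0%}")
```

Under these assumed numbers the client saves 16% overall, which sits inside the 15-25% band we typically see.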

The client conversation

We were initially nervous about telling clients we use AI in delivery. We worried it would undermine confidence in our work or reduce the perceived value of our expertise.

The opposite happened. Clients — particularly CTOs and IT directors — responded positively to the transparency. Several told us they were already assuming their vendors used AI, and appreciated that we were the only ones being explicit about it.

One CTO at a membership organisation told us: "I'd rather work with someone who tells me how the sausage is made than someone who pretends it's all hand-crafted artisan code."

What we'd recommend

If you're a technology consultancy considering AI integration, here's what we'd suggest based on our experience:

  • Be transparent with clients. The trust dividend of honesty is worth more than any short-term margin from hiding AI usage.
  • Don't reduce review time. AI-generated code needs more scrutiny, not less. Budget for this.
  • Track your actual efficiency gains. Don't assume the same savings across every project type. Measure it.
  • Invest the savings wisely. The time AI frees up should go into the work that requires human judgement — testing, accessibility, security, architecture.

AI hasn't changed what good software development looks like. It's changed the economics of getting there. That's valuable — but only if you're honest about it.

Jane Doe
Technical Director at Infobox. Writes about technology strategy, AI in delivery, and the Microsoft ecosystem.