$227 billion. That is what IDC projected enterprises would spend on Artificial Intelligence (AI) in 2025. The money is flowing. The ambition is real. The results are not. MIT’s NANDA initiative found that roughly 95% of enterprise AI projects stall before reaching meaningful business impact. S&P Global reported that 42% of companies scrapped most of their AI initiatives in 2025, up from 17% the previous year. Only about one in four organisations can move an AI proof-of-concept to production.
The conventional explanations are bad data, unclear objectives, and missing governance. Those problems are real. But they are symptoms. The underlying problem is structural, and it lives inside the delivery model that enterprises rely on to implement AI.
The technology services industry, the $400 billion global consulting ecosystem, operates on a model designed in the mid-1990s. An enterprise hires a services firm. Consultants arrive and spend weeks, sometimes months, learning the client’s business context, technology stack, compliance landscape, and institutional quirks. They build something. They leave. The knowledge walks out the door. The next project starts, sometimes with the same company but usually with a different team. Same learning curve. The enterprise pays what I call the learning tax: the invisible cost of rebuilding context from zero, project after project, year after year. Expertise does not compound. It does not evolve. It resets.
For years, our firm operated within this model. We delivered projects, rotated teams, rebuilt context. We were good at it. But we saw what our customers experienced: the repeated onboarding, the lost institutional knowledge, the frustration of explaining the same business rules to the third team in two years, the failure to carry forward past learnings and past mistakes. We decided that was no longer acceptable. AI has collapsed build time. Large language models generate in hours what once took weeks. Code, documentation, prototypes: the mechanical work of software delivery has been compressed by orders of magnitude.
Understanding a client’s business rules, compliance requirements, security guidelines, guardrails, organisational best practices, and operational quirks still takes months. It is human work. It resists automation. And in the traditional consulting model, it resets every time a project ends or a team rotates. This creates a devastating inversion: the faster AI makes the build phase, the more the reset phase dominates the timeline. Context-building becomes the bottleneck. And because the delivery model resets context by design, the bottleneck is permanent. At the same time, developers in the enterprise technology space are growing frustrated with code-generation tools: however quickly the tools produce code, the residual impact on delivery, particularly in an enterprise context, runs far deeper and is far more troublesome.
The reset pattern is not the result of bad intentions. It is the result of incentive architecture. Commercial structures rarely incentivise a services partner to address these issues from the ground up. Over decades, these patterns became embedded in the commercial model, not by design, but by inertia. The system optimised for what it measured, and it measured outputs, not outcomes. MIT found that more than half of enterprise AI budgets flow to sales and marketing pilots, yet the highest return on investment (ROI) comes from back-office automation. The result is a natural drift toward visible, demonstrable projects even though the data consistently shows higher returns elsewhere. Enterprises are investing the most where returns are lowest, not because individual consultants lack integrity, but because the commercial structures don’t reward the kind of deep, compounding, context-rich work that AI success requires. This has to change now, particularly given the capabilities we now have at our disposal.
If you are an enterprise leader investing in AI, here is the filter:
- Does your technology partner get smarter every time you work together? If every project starts from zero, if context rebuilds with every new team, if institutional knowledge walks out the door at the end of every engagement, you are paying the learning tax. More importantly, AI will not save you from it; AI will magnify it.
- Does your partner measure success by what you achieve, or by what they deliver? Features shipped and SLAs met are outputs. Revenue impact, decision speed, and risk mitigation are outcomes. Only one of those matters.
- Does your partner guide you to where the returns actually are? Domain-specific, back-office AI implementations succeed at roughly twice the rate of generic front-office pilots. A trustworthy partner steers toward impact, not demos.
I believe the industry needs a fundamentally new category of enterprise technology delivery. Not incremental improvement of the existing model. A structural break from it. I am calling this category compounding build. Compounding build is defined by a simple principle: every engagement must be business-outcome driven, fast, secure, governed, and consistent, and, most importantly, must make the next engagement faster, cheaper, and more accurate. Knowledge carries forward. Quality is systemic, not dependent on which team gets assigned. Decisions are encoded into the system rather than evaporating when a project ends and people move on.
This is not custom build, which is tailored but slow, and where knowledge walks out the door. It is not Software as a Service (SaaS), which is fast but generic, with no enterprise memory. It is not platform configuration, which is stable but rigid, unable to adapt at the speed enterprises need. Compounding build occupies the space none of these models can reach: high agility with high retention. Fast and intelligent. Custom and persistent. The architecture behind it is what I call the enterprise memory layer: a persistent layer of organisational intelligence that connects business intent, security controls, regulatory requirements, and execution decisions and paths so that AI operates within enterprise reality, not in a vacuum. Every engagement adds to this layer. Context is retained, not rebuilt. Governance is built in, not bolted on.
Consider the difference. In the traditional model, a first engagement takes six to nine months to reach production. The second engagement starts from scratch and takes roughly the same. In a compounding build model, the same first engagement reaches production in six to eight weeks, with a 30-40% productivity gain and enterprise memory initialised. The second engagement requires zero ramp-up because validated skills are reused and context is recalled instantly. By the third engagement, the system is self-strengthening, compounding ROI with a lower total cost of change every time.
This is not a product pitch. It is a structural necessity. The technology to enable this exists today. The question is whether commercial models and delivery practices will catch up before another cycle of enterprise AI investment is consumed by the reset.
The technology services industry stands at an inflection point. For firms that do not adapt, the consequences will be measurable: margin compression, client churn, and a steady downshift toward outcome-linked deals where the old revenue model no longer holds. Enterprises deserve partners whose expertise compounds rather than resets. The category now exists. It is called compounding build. And it changes the economics of every technology initiative that follows.
This article is authored by Ashwini Suman, founder & CEO, Kaara.