The Trillion Dollar Question: Why Can't 56% of CEOs Prove AI's Value?

Gartner projects that worldwide AI spending will total $2.5 trillion in 2026. However, according to PwC's latest Global CEO Survey, the majority of that investment can't be justified with actual business results.

PwC’s survey shows that 56% of CEOs report seeing neither revenue gains nor cost reductions from their AI initiatives. Think about that for a second. More than half of business leaders can't demonstrate any return from AI.

We're talking about thousands of companies, billions of dollars invested, and a fundamental disconnect between AI's promise and proven business impact. According to the survey of 4,454 CEOs across 95 countries, only 12% have achieved both increased revenue and reduced costs from AI. Nearly nine out of ten organizations are failing to demonstrate meaningful returns.

So the question is: why?

Why do some companies succeed with AI while most fail?

The survey reveals something interesting. There's a small group of companies – the "vanguard" – pulling ahead dramatically, while the majority are stuck in endless pilots and proofs of concept. The companies seeing real returns aren't just lucky. They're doing something different.

CEOs whose organizations have established strong AI foundations are three times more likely to report meaningful financial returns. What does "strong foundations" actually mean? Responsible AI frameworks, yes. Technology environments that enable enterprise-wide integration, sure.

But critically, and this is what most organizations miss: the ability to measure and demonstrate value systematically.

Why most AI investments fail the ROI test

The problem isn't AI capability. The technology works. What fails is how organizations think (or fail to think) about value generation from AI, right from the outset.

When we talk about value management and impact from AI, there are two parties involved – always. On one hand, you have the business teams. On the other, you have the data & AI teams. The business defines the problem and needs to do something with the output they get to actually generate value. The data team builds the solution. Both sides are necessary. Both incur costs.

Most organizations launch AI initiatives without clear success metrics, deploy models without performance baselines, and scale technologies without understanding their actual contribution to business outcomes. The result is innovation theater, not strategic transformation: visible activity and impressive demos that create the appearance of progress, but without measurable impact or lasting business change.

There's another layer to this. Even when you build a great model – say, a churn prediction that's 90% accurate – value doesn't automatically happen. The data team delivered what they promised. But if the business team doesn't know how to use those predictions effectively, or simply doesn't use the model at all, the value you expected doesn't materialize.

This shared responsibility in bringing use cases to life means both teams should jointly own the success or failure. Most organizations don't structure measurement this way.

Also read: Skin in the Game, Dear Business: Why AI Value Demands Shared Ownership

What does return on AI investment actually mean?

AI investments should be held to the same rigor as every other significant business decision. No more AI investments for the sake of hype.

The concept is straightforward: establish clear baseline metrics before implementing AI, define what success looks like in business terms with a value hypothesis, measure actual performance against those targets, and continuously optimize based on real results.

What makes AI different is that you need to design your value assessment methodology and data and AI impact management in a way that accounts for the shared responsibility between business and data and AI teams. Make individual contributions transparent – both during value qualification and in the early stages of use case discovery.

The implementation requires thinking about three distinct dimensions:

  1. Quality - Did the data team deliver what they said they would? Is the model accuracy where it needs to be? This is the technical team's contribution to value generation.
  2. Adoption - Is the business actually using the solution? Are they using it effectively to drive real decisions? You can build the best churn prediction model in the world, but if nobody looks at the dashboard or acts on the insights, nothing changes. This is the business team's contribution.
  3. Impact - What's the actual business outcome? Revenue increase, cost reduction, risk mitigation. This is jointly owned.

When one of these fails, you can demonstrate where it broke down. The data team can say, "We delivered a 90% accuracy model as promised. We're good." Or the business can show, "We acted on 95% of the predictions, but the model quality wasn't there." This transparency is critical for learning and for fair attribution of success or failure.
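The three dimensions above can be sketched as a simple per-use-case scorecard. This is a minimal illustration, not a prescribed framework: the owners, targets, and numbers are hypothetical, and a real implementation would pull actuals from monitoring and analytics systems.

```python
# Minimal sketch of three-dimension ROAI tracking for one AI use case.
# All names, targets, and numbers below are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Dimension:
    name: str      # "quality", "adoption", or "impact"
    owner: str     # which team is accountable for this dimension
    target: float  # agreed success threshold (fraction)
    actual: float  # measured result (fraction)

    @property
    def met(self) -> bool:
        return self.actual >= self.target

def diagnose(dimensions: list[Dimension]) -> list[str]:
    """Report, per dimension, whether the owning team delivered."""
    report = []
    for d in dimensions:
        status = "met" if d.met else "MISSED"
        report.append(f"{d.name} ({d.owner}): {status} "
                      f"(target {d.target:.0%}, actual {d.actual:.0%})")
    return report

# Hypothetical case: the data team delivered model quality, the business
# acted on the predictions, but joint business impact fell short.
use_case = [
    Dimension("quality",  "data & AI team", target=0.90, actual=0.91),
    Dimension("adoption", "business team",  target=0.80, actual=0.95),
    Dimension("impact",   "joint",          target=0.05, actual=0.03),
]
for line in diagnose(use_case):
    print(line)
```

Because each dimension carries an explicit owner, the report makes attribution immediate: in this hypothetical run, quality and adoption are met while impact is missed, so the conversation shifts from blame to revisiting the value hypothesis both teams signed up to.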

Why the gap will keep growing

PwC's research suggests 2026 is shaping up as a decisive year for AI. The gap between companies demonstrating real returns and those still experimenting is starting to show up in confidence levels and competitive positioning.

This gap will accelerate. Companies proving AI's value gain board support for additional investment, while those unable to demonstrate impact face increasing skepticism. Success compounds; uncertainty breeds hesitation.

Organizations that master return on AI investment (ROAI) measurement now will make bold, confident decisions about AI deployment. They'll know which initiatives to scale, which to sunset, and where to invest next. Those without this capability stay stuck in an endless cycle of pilots that never quite prove themselves worth scaling.

From proof of concept to proof of value

The path forward requires a mindset shift. Stop asking "Can AI do this?" Start asking "Should we deploy AI here, and how will we prove it's working?"

A few practical things this means:

  1. Start with business outcomes, not technology capabilities. What specific business problem are you solving? What's the size of that problem in monetary terms? What's realistic to recover? If you have 20% churn, you're probably not getting to 0%. Some churn is normal in any market, so look at industry benchmarks too. Maybe you can realistically get to 15%, and that 5% reduction is your value ceiling.
  2. Establish baselines before delivery. You can't prove improvement without knowing where you started. Be specific about assumptions. If you're building that churn prediction model, how will the business use it? Sending vouchers to at-risk customers is more expensive than sending a nice email, and probably more effective too. These assumptions directly impact both your expected value and your costs.
  3. Build measurement into the solution architecture from day one. ROAI can't be an afterthought. You need to track quality metrics, adoption metrics, and business impact from the start.
  4. Create clear accountability for results. Someone must own the numbers and be responsible for demonstrating value. This ownership should reflect that shared responsibility – both business and data & AI teams should report jointly on use case portfolio performance.
  5. Prioritize using a framework that captures the full picture. Not just impact and effort but also quality, adoption, and organizational risks. When you prioritize this way early on, you're already thinking through what needs to happen for value to materialize – before you build anything.
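The value-ceiling reasoning in step 1 can be made concrete with back-of-the-envelope arithmetic. In this sketch every input is a hypothetical assumption (customer count, revenue per customer, and the benchmark-informed churn floor), not a benchmark or a real figure:

```python
# Back-of-the-envelope value hypothesis for the churn example.
# Every number here is a hypothetical assumption for illustration.

customers = 100_000
annual_revenue_per_customer = 240.0  # assumed average revenue
baseline_churn = 0.20                # where you are today
realistic_churn = 0.15               # benchmark-informed floor, not 0%

# Value ceiling: the most the initiative could plausibly recover per year,
# before subtracting build, run, and adoption costs on both teams' sides.
retained_customers = customers * (baseline_churn - realistic_churn)
value_ceiling = retained_customers * annual_revenue_per_customer

print(f"Retained customers: {retained_customers:,.0f}")
print(f"Annual value ceiling: ${value_ceiling:,.0f}")
```

With these assumed inputs, the ceiling is the honest upper bound to test the investment against: if the combined cost of building and operating the solution approaches it, the use case shouldn't clear prioritization in the first place.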

Also read: AI Without a Value Hypothesis is Just an Experiment

The competitive advantage of measurable AI impact

Only 30% of CEOs are confident about revenue growth over the next 12 months, down from 56% in 2022. In this environment, every investment must justify itself. AI is no exception.

Companies that will win are those that can definitively answer: "What are we getting for our AI investment?" They'll make smarter deployment decisions, secure ongoing funding for successful initiatives, and avoid wasting resources on projects that don't deliver.

They'll build organizational confidence in AI as a strategic tool rather than an expensive experiment.

Making ROAI the standard, not the exception

The PwC survey shows most companies can't prove AI is working. That's a different problem than "AI doesn't work," which requires a different solution.

ROAI brings clarity and accountability to one of the most important technological transformations we're seeing.

The question facing every CEO: can they prove their AI investments are paying off? For 56% of business leaders, the honest answer today is no.

It doesn't have to stay that way.

Organizations that implement these practices systematically – that make shared responsibility transparent, that measure before they build, that treat AI with the same rigor as any other capital investment – will be the ones still standing when 2026 wraps up.

They'll be the ones actually getting value from AI instead of just talking about its potential.

Ready to get value out of your AI investments? We’ll show you how.