From Data Ambition to AI Impact: Why Most AI Projects Fail Before They Begin

An Interview with Vishu Singhal, Partner at Artefact

Artificial intelligence has moved from experimentation to expectation—but for many organizations, real business value remains elusive. While headlines often focus on model sophistication and generative breakthroughs, the reality on the ground is far more pragmatic: AI succeeds or fails based on the quality, relevance, and usability of the data that feeds it—and the decisions it is designed to influence.

In this interview, Vishu Singhal, Partner at Artefact, cuts through the hype to explain why most AI initiatives stumble long before algorithms are deployed. From data silos and governance gaps to misplaced success metrics and the dangers of “prototype thinking,” Vishu shares a grounded, experience-led perspective on what it really takes to turn data into durable AI-driven impact. His insights offer a timely reminder that AI is not a technology problem to be solved in isolation, but a business transformation challenge rooted in clarity, accountability, and execution.

How do you determine whether a company’s data is actually good enough to support an AI initiative?

We use a fitness-for-purpose framework rather than looking for perfection. Data doesn’t need to be flawless; it needs to be representative and accessible. We assess it across three dimensions.

First is signal-to-noise ratio: does the data actually contain the patterns needed to predict the outcome? For example, in a churn prediction project for a major telecom operator, ten years of billing data is useless if we lack the recent clickstream data that captures customer frustration with the mobile app.

Second is operational relevance. Is the data reflective of current reality? Post-pandemic consumer behaviour has shifted so dramatically that 2019 data can actively damage a 2025 forecasting model. We also assess whether the data can be refreshed and monitored at scale in production.

Third is lineage. Do we know where the data comes from? If the data is a black box that cannot be audited, AI outputs will never gain executive trust.

In large enterprises, 70–80% of AI use cases fail not because models are too simple, but because critical signals are missing, fragmented, or poorly understood. A fast proof-of-concept using real historical decisions is often the most honest test of whether data is “good enough.”
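To make this concrete, here is a minimal, illustrative sketch (not Artefact's actual tooling; the column names and thresholds are hypothetical) of how the first two dimensions might be scored programmatically:

```python
# Illustrative fitness-for-purpose check: signal-to-noise and freshness.
# Hypothetical schema; thresholds would be set per use case in practice.
import pandas as pd
from sklearn.metrics import roc_auc_score

def assess_fitness(df: pd.DataFrame, signal_col: str, label_col: str,
                   timestamp_col: str, max_staleness_days: int = 30) -> dict:
    # Signal-to-noise: AUC of the raw signal against the historical outcome.
    # An AUC near 0.5 means the signal carries no predictive information.
    auc = roc_auc_score(df[label_col], df[signal_col])

    # Operational relevance: how stale is the newest record?
    staleness = (pd.Timestamp.now() - df[timestamp_col].max()).days

    return {
        "signal_auc": round(auc, 3),
        "staleness_days": staleness,
        "fit_for_purpose": auc > 0.65 and staleness <= max_staleness_days,
    }

# Toy example with synthetic churn data.
df = pd.DataFrame({
    "app_error_rate": [0.02, 0.30, 0.05, 0.45, 0.01, 0.38],
    "churned":        [0,    1,    0,    1,    0,    1],
    "last_updated":   pd.to_datetime(["2025-01-10"] * 6),
})
print(assess_fitness(df, "app_error_rate", "churned", "last_updated"))
```

Lineage, the third dimension, resists this kind of automated scoring; it usually has to be established by auditing the pipeline itself.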

What data problems most often prevent AI projects from delivering business value?

The biggest value killers are rarely technical—they’re structural. One is data silos, or what we call “fragmented truth.” When marketing data lives in one cloud and supply chain data in another, AI cannot see the relationship between a promotion and a stock-out. Another is the prototype trap. Teams build models on clean, manually prepared data. When they move to production, they discover that live data arrives late, incomplete, or inconsistent.

The third is a lack of business change enablement. Even accurate models fail if teams are not trained, incentivized, or equipped to act differently. AI creates insight; value materializes only when processes, roles, and behaviours change to absorb it.

How do you translate a business question into a solvable data problem?

We move from ambition to decision. Before any modelling starts, we force clarity on three questions: What decision will change? Who will act on it? What happens if the model is right—or wrong?

From there, we reverse-engineer the data problem. Instead of asking “Can we predict this?”, we ask “What signal or intervention can we influence early enough to matter?” Often, that reframes the problem entirely.

The most successful AI use cases are rarely the most complex. They are the ones with clear decision boundaries, actionable time horizons, and data that already exists—or can be created pragmatically.

What role do data quality, governance, and ownership play in successful AI deployments?

They are not hygiene factors; they are strategic enablers. High-performing organizations treat data products like business products. Every critical dataset has an owner accountable for definitions, quality thresholds, and evolution. Governance is lightweight but explicit.

From an AI perspective, governance underpins trust. Executives and frontline teams adopt AI only when they understand where data comes from, how it is updated, and what its limitations are. In regulated industries, this is mandatory; in others, it’s the difference between pilots and real impact.
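As a purely illustrative sketch of what such lightweight governance can look like, a data contract may be nothing more than a declared owner plus explicit quality thresholds; the fields and values below are hypothetical:

```python
# Illustrative data contract for a "data product" (hypothetical fields,
# not a specific governance tool): every critical dataset declares an
# accountable owner, an agreed definition, and explicit thresholds.
from dataclasses import dataclass, field

@dataclass
class DataContract:
    dataset: str
    owner: str                    # accountable business owner, not just IT
    description: str              # the agreed definition of the dataset
    max_staleness_hours: int      # freshness threshold before alerts fire
    min_completeness: float       # required share of non-null key fields
    sources: list = field(default_factory=list)  # lineage: where it comes from

churn_signals = DataContract(
    dataset="customer_churn_signals",
    owner="head_of_customer_care",
    description="Daily app clickstream and billing events per subscriber",
    max_staleness_hours=24,
    min_completeness=0.98,
    sources=["app_events", "billing_ledger"],
)
```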

Which metrics matter most when judging AI success?

Model accuracy is rarely the metric that matters most at the executive level. What truly counts is decision impact: revenue uplift, cost reduction, risk avoided, or time saved. We also track adoption (how often recommendations are used, overridden, or ignored) and operational stability, such as data freshness and failure rates. Leading organizations also measure value durability. AI is not a launch event; it is a managed asset that must adapt as markets and behaviours change.
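For illustration only (the log schema is hypothetical), adoption tracking can start with a simple tally of how recommendations are acted upon:

```python
# Illustrative adoption metric: the share of AI recommendations that were
# followed, overridden, or ignored, from a hypothetical decision log.
import pandas as pd

decisions = pd.DataFrame({
    "recommendation_id": range(1, 9),
    "action_taken": ["followed", "followed", "overridden", "ignored",
                     "followed", "overridden", "followed", "ignored"],
})

# Normalized counts give the adoption, override, and ignore rates.
adoption = decisions["action_taken"].value_counts(normalize=True)
print(adoption.round(2))
```

A rising override rate is often the earliest warning that model outputs no longer match operational reality.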

When working with limited or biased data, how do you decide whether to proceed or pause?

This is where experience matters more than expertise. Limited data doesn’t always mean stop, but it does require honesty. We ask three questions. Is the bias understood and measurable? Is the downside of being wrong acceptable? And is there a path to learning?

Often, the right answer is to redesign the use case: introducing human-in-the-loop review, focusing on ranking rather than prediction, or using AI to highlight uncertainty instead of asserting certainty. The goal is not perfection, but responsible progress aligned with business reality.
