Data Trust: The Essential Foundation for Scaling AI in Financial Services
Is your data foundation strong enough for AI? As excitement around the technology continues to grow, it is arguably the most important question that technology and IT leaders should be asking – especially in financial services and insurance, where accuracy is not just a matter of trust but of regulatory compliance.
Can you rely on the data that you’re feeding into your AI to be secure, accurate, up-to-date, and ethical? As we wrote in our recent blog, your AI can only be as good as the data that fuels it. Get those foundations wrong, and your AI initiatives won’t just be a disappointment: they could be a risk to your reputation and your bottom line.
Yet balancing the urgency to embark on exciting new AI applications with caution around data quality isn’t easy. On one hand, you need to maintain momentum and deliver results quickly to the board. On the other, you need confidence, from a technical perspective, that those outcomes are built on solid, accurate information.
In this article, we share three lessons on this subject from digital leaders across the global finance sector: Bank of America, Mastercard, and OneFamily. They explain why data clarity, governance, and discipline are the invisible engines behind AI success.
Lesson 1. The Value of Continuous Data Discipline: Insights from OneFamily
“ChatGPT can create fantastic recipes to cook with, but they may or may not make tasty meals.”
Graham O’Sullivan, Group CIO at OneFamily
In a recent interview with CIO Magazine, Graham O’Sullivan, Group CIO at OneFamily, captured the complexity and risk of deploying generative AI in financial services. Tools like ChatGPT, he noted, are capable of producing remarkable content, including impressive-looking recipes. But they can’t guarantee a good meal or, more importantly, a safe one.
For O’Sullivan, this analogy cuts to the heart of a bigger issue: trust. In regulated sectors like financial services, using generative AI isn’t just about what the technology can do; it’s about what it should do, and how safely and reliably it can do it. At the centre of that reliability is data. Not just high-quality data at the outset, but a long-term, embedded commitment to data stewardship.
“Traditionally, data quality has been a challenge,” he says in the interview. The real solution, he argues, lies in “the continuous discipline” of governance, a point that resonates strongly with those working to implement Master Data Management (MDM) across complex, legacy-bound environments. Without that discipline, the risk isn’t just technical failure. It’s a loss of business confidence, regulatory exposure, and missed commercial opportunities as AI initiatives struggle to move from pilot to production.
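To make “continuous discipline” concrete, here is a deliberately minimal sketch of the idea: quality rules that run as an automated audit on every data load, rather than as a one-off cleanup project. The field names, rules, and thresholds below are illustrative assumptions, not any firm’s actual standards.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class QualityRule:
    name: str
    check: Callable[[dict], bool]  # returns True when a record passes

# Hypothetical rules for an illustrative customer record
RULES = [
    QualityRule("has_customer_id", lambda r: bool(r.get("customer_id"))),
    QualityRule("plausible_dob", lambda r: r.get("dob", "") >= "1900-01-01"),
    QualityRule("email_present", lambda r: "@" in r.get("email", "")),
]

def audit(records):
    """Return a per-rule failure count so stewards can track quality over time."""
    failures = {rule.name: 0 for rule in RULES}
    for record in records:
        for rule in RULES:
            if not rule.check(record):
                failures[rule.name] += 1
    return failures

records = [
    {"customer_id": "C1", "dob": "1980-05-02", "email": "a@example.com"},
    {"customer_id": "", "dob": "1899-12-31", "email": "not-an-email"},
]
print(audit(records))  # the second record fails all three rules
```

The point is not the rules themselves but the habit: the audit runs continuously, its results are tracked, and a rising failure count becomes a governance signal rather than a surprise discovered mid-project.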
O’Sullivan also brings attention to the importance of aligning every technology initiative (whether it's AI, MDM, or otherwise) to a clearly defined business outcome. It’s not enough to implement new capabilities because they’re trending. The investment in data infrastructure and governance must be tied to a strategic goal: improving efficiency, enabling smarter decisions, strengthening compliance, or unlocking new services.
In a world rushing to build with AI, O’Sullivan’s advice is a timely reminder that success starts long before the model is deployed. It starts with the data, and with the systems, people and principles that manage it.
Read the full interview on CIO Magazine
Lesson 2. How a Governance Framework Drives AI at Scale: The Bank of America Approach
“Everything we do in AI goes through a governance process that has 16 different pillars.”
Hari Gopalkrishnan, Bank of America
If there’s one lesson financial institutions can take from Bank of America’s AI journey, it’s this: clarity around your data (where it comes from, how it’s governed, and whether it can be trusted) is non-negotiable. That clarity isn’t just a technical requirement; it’s what allows a business to scale AI safely, sustainably, and with confidence.
Hari Gopalkrishnan, the bank’s head of consumer, business, and wealth management technology, puts governance at the centre of this strategy. “Everything we do in AI goes through a governance process that has 16 different pillars,” he told CIO Magazine. Bias and transparency are part of that framework, but the foundation is broader: a deep, structural focus on how data is sourced, cleaned, and maintained. “We’ve been modernising our data plan,” he said. “Sourcing the data from the right place, making sure it’s clean, making sure it’s well governed, making sure it’s dealt with in a responsible way.”
That discipline is what has made it possible for Bank of America to grow its in-house AI agent, Erica, from a small model to one that now handles billions of customer and employee interactions, without compromising on trust or experience. It’s not the size of the model that matters most, Gopalkrishnan suggests, but the precision of its training and the quality of its inputs.
This clarity-first mindset is what Forrester analyst Brian Hopkins calls “pragmatic precision.” He describes the bank’s AI strategy as a masterclass in scaling digital engagement without cutting corners, particularly in an industry where trust is everything. “They’ve taken a more cautious path into gen AI,” he says, “but I’d argue it might turn out to be the smart play.”
In the rush to adopt generative AI, many organisations are still trying to retrofit governance and data structure onto systems already in motion. Bank of America’s approach is a reminder that if you get the foundations right, from MDM to analytics pipelines, AI becomes not just powerful, but explainable, repeatable, and safe.
Read the full interview on CIO Magazine
Lesson 3. Integrating Ethics and Fairness into AI Governance: A Lesson from Mastercard
“Some of the key components of our AI governance include rigorous model testing for bias and fairness, data responsibility standards.”
Rajesh Chopra, Mastercard
If you’re using AI to dig deeper into your data, accuracy can’t just mean getting the numbers right. It also has to include fairness, context, and responsibility. That’s the message from Rajesh Chopra, Senior Vice President at Mastercard, who sees governance not as an afterthought to innovation but as the foundation that makes innovation usable, scalable and trustworthy.
In a recent interview, Chopra described how Mastercard is applying AI across multiple fronts, from fraud detection and cybersecurity to customer experience and decision support. At the heart of that activity is Brighterion, an AI platform that delivers real-time decisions by analysing data from across the payments ecosystem.
The technology itself is only part of the story. What makes it viable at scale is the way Mastercard has wrapped it in a clear governance structure, one designed not only to meet compliance needs, but to uphold values like fairness, privacy, and transparency.
That structure includes an internal AI Governance Council, cross-functional by design, and responsible for overseeing initiatives with significant financial and ethical implications. It includes formal testing for bias and fairness. It includes defined standards for data sourcing. Crucially, it maintains human oversight in high-impact AI decisions, ensuring that the systems act as collaborators, not substitutes.
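What might “formal testing for bias and fairness” look like in practice? One common metric is demographic parity: comparing approval rates across groups. The sketch below is an illustrative assumption of one such test, not Mastercard’s actual method; group labels, data, and any flagging threshold are invented for the example.

```python
def approval_rate(decisions, group):
    """Fraction of a group's decisions that were approvals."""
    subset = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in subset) / len(subset)

def parity_gap(decisions, group_a, group_b):
    """Absolute difference in approval rates between two groups.
    A large gap is a signal for human review, not an automatic verdict."""
    return abs(approval_rate(decisions, group_a) - approval_rate(decisions, group_b))

# Toy decision log: group A approved 2 of 3, group B approved 1 of 3
decisions = [
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 1},
    {"group": "A", "approved": 0},
    {"group": "B", "approved": 1},
    {"group": "B", "approved": 0},
    {"group": "B", "approved": 0},
]

print(f"parity gap: {parity_gap(decisions, 'A', 'B'):.2f}")  # prints "parity gap: 0.33"
```

A real programme would monitor many such metrics across model versions and route any flagged gap to the kind of cross-functional council described above; the value of the test is that it turns “fairness” from a principle into a number someone is accountable for.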
The message is clear: if AI is going to guide decisions, from fraud prevention to customer engagement, the inputs need to be not just technically clean, but ethically sound. That means establishing a trusted foundation, something Master Data Management supports by creating consistency, traceability, and control over critical data domains. It also means having the right frameworks in place to monitor and steer how that data is used.
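The “consistency, traceability, and control” that MDM provides can be pictured with a minimal sketch: merging duplicate customer records from different source systems into a single golden record while keeping a trace of where each field came from. The field names and the survivorship rule (freshest non-empty value wins) are illustrative assumptions, not any specific vendor’s implementation.

```python
def golden_record(duplicates):
    """Merge duplicate records field-by-field, preferring the freshest
    non-empty value, and record which source supplied each field."""
    merged, lineage = {}, {}
    for rec in sorted(duplicates, key=lambda r: r["updated"]):  # oldest first
        for field, value in rec.items():
            if field in ("source", "updated"):
                continue  # metadata, not master data
            if value:  # fresher non-empty values overwrite older ones
                merged[field] = value
                lineage[field] = rec["source"]
    return merged, lineage

# Two views of the same customer from hypothetical source systems
crm = {"source": "crm", "updated": "2024-01-10", "name": "J. Smith", "phone": ""}
core = {"source": "core", "updated": "2024-03-02", "name": "Jane Smith", "phone": "555-0100"}

record, trace = golden_record([crm, core])
print(record)  # {'name': 'Jane Smith', 'phone': '555-0100'}
print(trace)   # both surviving fields traced to 'core'
```

The lineage map is the part that matters for AI trust: when a model’s output is questioned, you can say not just what the data was, but which system it came from and why that value survived.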
For financial services firms moving toward AI adoption, the lesson here is a practical one. You can build fast—but only if you build responsibly. That starts with knowing what your data represents, how it’s being used, and who is accountable when decisions are made.
Read the full interview on Dataquest India
The Takeaway: You Need to Master Your Data Before You Can Rely on AI
AI is only as valuable as the data, and the governance, that support it. For leaders in financial services, the message is clear: clean, connected data is essential, but it’s not enough. That data must be traceable, structured, and used with care. Master Data Management helps create that foundation and gives structure to the data governance practices that ensure data is used wisely. Together, they enable AI to move from promise to performance: safely, responsibly, and at scale.
Aligning AI initiatives with your company data strategy ensures clarity and trust.