In Compliance, the AI race is on. But are we building on solid foundations? - Takeaways from the A-Team RegTech conference in London

  • November 11, 2025

As the financial world gathers annually, conferences like SIBOS invariably highlight the tension between regulatory complexity, market fragmentation, and the relentless drive towards innovation. This year, three major themes, Payments, Artificial Intelligence, and Digital Assets, dominated conversations. Yet beneath the buzz of new technologies and strategic shifts, a single foundational truth emerged: data standardisation is the indispensable factor that ties all these threads together.

The RegTech industry met up at the A-Team RegTech conference in London last week, and the topic on everyone’s mind was, unsurprisingly, artificial intelligence. From keynotes on innovation sandboxes to deep dives on agentic AI, the message was clear: the future of compliance is being redefined by intelligent technology.

But as the day unfolded, a more cautious and pragmatic narrative emerged. The promise of AI is running headfirst into the messy reality of fragmented data, legacy integration, and the absolute need for human oversight.

The conference confirmed what we've seen in practice: you cannot have an effective AI strategy without a rock-solid data and integration strategy. The industry is grappling with how to build this future on a foundation of trust, not hype. The way forward, it seems, is a convergence of AI, trusted data, and open standards, all underpinned by collaboration and transparency.

A "principles-based" sandbox: The Regulator's view

The day's keynote from the UK Financial Conduct Authority (FCA) set a fascinating tone. The regulator positions itself as an "AI innovation facilitator", and with good reason. It is actively building an AI sandbox, complete with synthetic data, to allow firms to safely test and experiment with new solutions under regulatory supervision.

This is a clear signal that innovation is welcome. However, it came with two critical clarifications.

First, the FCA confirmed it is not yet using AI to analyse its own reporting data internally. This is a telling admission. It shows that the regulator is exercising caution, understanding the validation and trust required before embedding AI into critical functions.

Second, and perhaps most significantly for global firms, the UK is not aligning with the EU AI Act. Instead, it is championing a more flexible, principles-based approach. This move avoids prescriptive, one-size-fits-all rules, but it places a greater burden on firms to prove their AI models are fair, effective, and auditable. Without a clear data trail, "explainability" becomes a significant challenge, if it is achievable at all.


Agentic AI: Enhancing, not replacing, the human

The panels on agentic AI (those more advanced systems that can reason, plan, and act) provided a glimpse into the near future. The consensus was a reassuring one: the goal is not to replace human workflows but to enhance them.

The most practical use cases discussed included using AI for automated testing of data and regulatory logic, and as a synthesis tool to help teams interpret market signals and generate alerts from vast information streams.
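To make the "automated testing of regulatory logic" idea concrete, here is a minimal sketch of how such a check might look in code. It is purely illustrative: the field names, thresholds, and rules below are assumptions made up for the example, not any panellist's or vendor's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class TradeReport:
    # Illustrative reporting fields only; real reports carry far more detail.
    uti: str                      # unique transaction identifier
    notional: float
    currency: str
    reporting_deadline_met: bool

def validate_report(report: TradeReport) -> list[str]:
    """Return a list of human-readable rule breaches for one report."""
    breaches = []
    if not report.uti:
        breaches.append("Missing UTI: report cannot be reconciled.")
    if report.notional <= 0:
        breaches.append("Non-positive notional: value must be greater than zero.")
    if len(report.currency) != 3:
        breaches.append("Currency must be a 3-letter ISO code.")
    if not report.reporting_deadline_met:
        breaches.append("Report submitted after the regulatory deadline.")
    return breaches

# Usage: run the checks across a batch before submission.
reports = [TradeReport(uti="UTI123", notional=1_000_000.0,
                       currency="EUR", reporting_deadline_met=True)]
for r in reports:
    for breach in validate_report(r):
        print(f"{r.uti or '<no UTI>'}: {breach}")
```

The value of expressing rules this way is that every breach comes back as a readable message, which makes the checks easy to review and to regression-test as regulations change.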

But here, the hurdles became clear. Every panellist agreed that data quality and integration remain the foundational barriers. Firms are still struggling to aggregate data effectively due to privacy and interoperability challenges. Furthermore, the risks posed by the "black box" nature of AI are all too real. Speakers warned that models can hallucinate or overlook crucial ethical and legal nuances. The need for human-in-the-loop oversight is, and will remain, non-negotiable.

The GRC challenge: Why "big bangs don't work"

Moving from futuristic AI to present-day realities, the discussion on integrating Governance, Risk, and Compliance (GRC) systems brought the focus back to everyday practicalities.

Panellists highlighted the persistent, costly reconciliation issues that plague firms, especially in complex products like commodities. The core lesson from the front line? "Big bangs don’t work". A strategic, phased approach to data integration is the only way to succeed.

This reality is also forcing a change in people and skills. The panel identified a rising need for data analytics expertise within compliance teams. It's no longer enough to throw requirements ‘over the wall’ to IT teams. Compliance professionals must be able to collaborate directly with technology and business functions, armed with data literacy.

This panel also raised a critical, often-overlooked point on vendor management: data continuity. Firms must ensure their contracts protect their data ownership and access, especially if a third-party vendor fails.

The surveillance blind spot: Off-the-shelf AI isn't enough

The surveillance panel provided perhaps the clearest example of AI's promise versus its limits. AI is already reshaping trade surveillance by helping to reduce false positives, prioritise alerts, and contextualise market changes in real time. In volatile markets, this is invaluable for helping teams focus on genuine risk.

The warning, however, was stark: off-the-shelf AI models simply do not work for surveillance.

This domain requires deep customisation and tuning. More importantly, explainability remains a critical requirement for both auditors and regulators. If you cannot explain precisely why an AI flagged one event and ignored another, the model is not fit for purpose. The most tangible benefits are coming from incremental, targeted use of AI, not wholesale automation.
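To illustrate the explainability point, here is a simplified sketch (not an actual surveillance product) of alert prioritisation in which every score carries the specific signals that produced it. The signal names, weights, and thresholds are assumptions made up for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    alert_id: str
    score: float = 0.0
    reasons: list[str] = field(default_factory=list)  # audit trail of why it fired

def prioritise(alert_id: str, price_move_pct: float,
               volume_ratio: float, after_hours: bool) -> Alert:
    """Score an alert from transparent signals, recording each contribution."""
    alert = Alert(alert_id)
    if price_move_pct > 5.0:
        alert.score += 0.5
        alert.reasons.append(f"Price moved {price_move_pct:.1f}% (above the 5% threshold)")
    if volume_ratio > 3.0:
        alert.score += 0.3
        alert.reasons.append(f"Volume was {volume_ratio:.1f}x the 30-day average")
    if after_hours:
        alert.score += 0.2
        alert.reasons.append("Activity occurred outside normal trading hours")
    return alert

alert = prioritise("ALERT-001", price_move_pct=7.2, volume_ratio=4.1, after_hours=False)
print(alert.score, alert.reasons)  # reviewers and auditors see exactly why it ranked high
```

Because the reasons travel with the alert, a reviewer or auditor can see exactly why one event ranked above another, which is the property regulators are asking for.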


The foundation: Where data standards make for safer AI

The themes from every single panel, covering risk, integration, explainability, and data quality, all point to one unavoidable conclusion. Before firms can safely embed AI, they must first build a data backbone that ensures interoperability, auditability, and trust.

This is where the foundational work on standardisation becomes the critical enabler for all future innovation.

The implementation of ISDA Digital Regulatory Reporting (DRR) and the FINOS Common Domain Model (CDM) is key. These frameworks are not just data models; they are the shared language the industry needs to collaborate effectively.

By creating structured, open standards, CDM and DRR provide the auditable, consistent data framework necessary for AI-driven systems. It is this standardisation that allows AI to operate consistently across different jurisdictions and systems, and it is what makes "AI-ready" infrastructure a practical reality, moving it from conference buzzword to a tangible, resilient compliance tool.
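As a rough illustration of what a shared, structured data shape buys you, the sketch below models a trade event once and serialises it into a single canonical form that every downstream consumer reads the same way. This is a simplified stand-in for the idea, not the actual CDM or DRR schema, and all field names are assumptions.

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

@dataclass(frozen=True)
class InterestRateLeg:
    payer: str
    rate: float          # fixed rate, expressed as a decimal
    notional: float
    currency: str

@dataclass(frozen=True)
class TradeEvent:
    # One structured, machine-readable record of the economic event.
    event_type: str      # e.g. "execution", "termination"
    trade_date: date
    legs: tuple[InterestRateLeg, ...]

def to_canonical_json(event: TradeEvent) -> str:
    """Serialise the event into one consistent shape that every downstream
    consumer (reporting, risk, AI models) reads the same way."""
    return json.dumps(asdict(event), default=str, sort_keys=True)

event = TradeEvent(
    event_type="execution",
    trade_date=date(2025, 11, 10),
    legs=(InterestRateLeg("PartyA", 0.032, 10_000_000.0, "USD"),),
)
print(to_canonical_json(event))
```

When reporting logic, risk systems, and AI models all consume the same canonical representation, their outputs become comparable and auditable, which is precisely the foundation the panels kept returning to.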

Building compliance on trust, not hype

The A-Team conference left the industry with a clear directive. AI is undoubtedly transforming regulatory technology. But this transformation will only succeed if it is built upon trusted data, open standards, and robust human oversight.

From the FCA's principles-based sandbox to the industry's cautious embrace of agentic AI, the path forward is not a technology race. It is a mission to build a more transparent, collaborative, and interoperable compliance ecosystem. 

The AI revolution is coming, but it must be built on a solid foundation of data standards.

 
