An exclusive article by Fred Kahn
Artificial intelligence and machine learning tools are rapidly transforming Anti-Money Laundering and Countering the Financing of Terrorism (AML/CFT) compliance. Institutions are deploying complex models for customer risk scoring, transaction monitoring, sanctions screening, and detecting behaviour patterns consistent with illicit finance. But with that power comes increasing regulatory demand for explainability. Regulators in the European Union and the United States, together with the Financial Action Task Force, are pushing for transparency, accountability, and auditability in AI/ML-driven AML systems. Without explainability, financial institutions risk legal liability, operational failure, and erosion of trust.
This article explains the emerging regulatory landscape, why explainability is now essential, what risks institutions face if they fail to explain decisions, and practical steps to ensure compliance.
The EU AI Act and High-Risk AI Systems in AML
The European Union passed the Artificial Intelligence Act (EU AI Act, Regulation (EU) 2024/1689), which came into force on 1 August 2024. It sets out a legal framework for AI systems, with special obligations for "high-risk" AI systems. AI used for AML tasks such as transaction monitoring, customer due diligence, and sanctions screening is frequently treated as high-risk because it affects fundamental rights, financial crime prevention, and financial stability.
Under the AI Act, providers of high-risk AI systems must fulfil multiple transparency obligations: they must maintain technical documentation, retain logs, ensure human oversight, record how decisions are made, explain how inputs map to outputs, and produce traceable audit trails. Article 13 requires that high-risk AI systems be designed to be transparent so deployers (those implementing the AI) can interpret outputs and use the systems appropriately.
Deployers have duties under Article 26: they must use systems as instructed, ensure input data is relevant, monitor system performance, assign competent human oversight, and suspend use if the system poses unexpected risks.
Also under the Act, providers must keep documentation (technical, governance, risk-assessment, logs) for specified durations (for example, technical documentation for 10 years and automatically generated logs for at least six months) so that national authorities can verify compliance.
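To make the documentation and logging duties concrete, the sketch below shows one way a provider or deployer might structure an append-only record of a single model decision so that inputs, outputs, thresholds, and any human oversight remain traceable. The schema, field names, and values are illustrative assumptions for this article, not a format prescribed by the AI Act.

```python
# Illustrative sketch only: a hypothetical record structure for logging one
# AML model decision so that inputs, outputs and reasoning stay traceable.
# The field names and schema are assumptions for this article, not a format
# prescribed by the EU AI Act.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ModelDecisionRecord:
    model_id: str          # which model produced the decision
    model_version: str     # exact version deployed at decision time
    decided_at: str        # ISO 8601 timestamp
    input_features: dict   # the feature values the model actually saw
    score: float           # raw model output
    threshold: float       # threshold applied at decision time
    outcome: str           # e.g. "flagged" or "cleared"
    top_factors: list      # human-readable drivers of the score
    human_override: bool = False
    reviewer: str = ""     # who exercised human oversight, if anyone

record = ModelDecisionRecord(
    model_id="txn-monitoring",
    model_version="2.3.1",
    decided_at=datetime.now(timezone.utc).isoformat(),
    input_features={"amount_eur": 9_800, "country_risk": 0.82, "velocity_7d": 14},
    score=0.91,
    threshold=0.75,
    outcome="flagged",
    top_factors=["high 7-day transaction velocity", "elevated country risk"],
)

# Serialise as an append-only log entry; automatically generated logs must be
# retained for at least six months under the AI Act.
print(json.dumps(asdict(record), indent=2))
```

Each record of this kind can be written to immutable storage and retained in line with the Act's documentation and log-retention durations.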
These requirements align closely with AML compliance imperatives, because many AML models that flag suspicious transactions or customers are exactly the sort of use case that falls under high risk: they affect individuals' financial rights and can trigger investigations, asset freezes, and reputational harm.
FATF Guidance and Global Expectations
The Financial Action Task Force (FATF), the international standard-setting body for AML/CFT, has long been watching the interaction of emerging technologies with AML systems. In its 2021 report "Opportunities and Challenges of New Technologies for AML/CFT," FATF specifically calls out explainability and interpretability of AI and machine learning as key challenges. It notes that opaque models can make it difficult for regulated entities to satisfy their obligations, including showing how decisions are made, what factors were considered, and explaining outcomes such as why a customer was flagged or cleared.
FATF also expects that any implementation of new technology be consistent with risk-based approaches, human oversight, auditability, and good governance. The technology must not be a "black box" that institutions cannot explain in reviews, to supervisors, or in court.
More recently, FATF has signalled, in updated guidance and mutual evaluation expectations, that countries should ensure their supervisory frameworks enable scrutiny of AI tools in AML, including requiring institutions to document model risk, performance, false positives, and false negatives, and to ensure transparency in the decision logic.
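As an illustration of the performance documentation this points to, the short calculation below derives false positive and false negative rates from confusion-matrix counts. The figures are invented placeholders, not benchmarks drawn from FATF or any supervisor.

```python
# Made-up counts, purely for illustration: how false positive and false
# negative rates might be reported as part of model performance documentation.
true_positives = 120      # alerts that were genuinely suspicious
false_positives = 880     # alerts investigated and closed as legitimate
false_negatives = 15      # suspicious activity the model missed
true_negatives = 99_000   # legitimate activity correctly left unflagged

false_positive_rate = false_positives / (false_positives + true_negatives)
false_negative_rate = false_negatives / (false_negatives + true_positives)
precision = true_positives / (true_positives + false_positives)

print(f"False positive rate: {false_positive_rate:.2%}")
print(f"False negative rate: {false_negative_rate:.2%}")
print(f"Alert precision:     {precision:.2%}")
```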
U.S. Regulatory Pressure and Liability Risks
In the United States, although there is not yet a unified federal AI regulation expressly governing AML, multiple agencies and policies are converging toward requiring transparency, explainability, and accountability in AI/ML systems used in financial crime prevention.
Federal financial regulators (including bank supervisors) have issued model risk management and supervisory expectations that AI/ML tools must be governed, documented, and validated, with oversight by qualified personnel. For example, the OCC (Office of the Comptroller of the Currency) treats AI/ML as model risk, demanding robust governance, third-party risk management, and clarity on how decisions are made.
The U.S. Treasury Department's reports and requests for information highlight that financial institutions using AI for AML functions will need to show how inputs, the model's logic, thresholds, and outcomes are interpretable and auditable.
State laws and consumer protection laws (covering unfair or deceptive practices, data protection, and discrimination) can also come into play if an AI model discriminates against or unfairly treats customers, or fails to explain adverse decisions. Courts or regulators may hold institutions liable if customers cannot understand why they were flagged, cleared, or denied service.
There is also exposure under liability frameworks if a financial institution's use of an opaque model leads to regulatory sanctions, fines for non-compliance, wrongful asset freezes, reputational harm, or legal claims from customers or counterparties. In particular, product liability, contractual liability, and regulatory enforcement risks are increasing. U.S. examiners have flagged "audit trails" and "after-the-fact transparency" as essential in regulatory filings and examinations.
Why Explainability Matters: Liability, Trust, and Operational Efficiency
Without explainability, financial institutions face multiple categories of risk.
Regulatory risk: Failure to meet obligations under the EU AI Act, locally implemented FATF standards, or U.S. regulatory expectations can lead to fines, sanctions, orders to change practices, debarment, or worse. For institutions operating across borders, a compliance mismatch can trigger investigations in multiple jurisdictions.
Legal liability: If a customer is wrongly flagged as suspicious, or incorrectly cleared, and suffers harm (financial, reputational, loss of opportunity), that customer may have grounds for a legal claim. Opaque ML models make it difficult for institutions to provide a defensible explanation, and judicial or regulatory processes often demand an explanation of how decisions were reached.
Operational risk: Black-box models tend to generate more manual overrides, rework, and investigations, cause confusion among compliance staff, and drive inefficiency and high false positive or false negative rates. Lack of explainability makes it hard to tune a model, monitor drift, assess bias, or understand failures.
Reputational risk: Clients and markets expect fairness and transparency; failure to explain decisions, especially adverse ones, undermines trust. Regulatory or media scrutiny over unexplained AI behaviour can damage the brand.
Financial cost: Beyond fines, institutions bear the cost of remediation after a failure, litigation, additional oversight, fixes to model issues, technology adjustments, and hiring outside experts.
Explainability also supports fairness, equity, data protection rights (where applicable), helps demonstrate compliance to internal or external auditors, and ensures the institution is resilient under supervision.
Steps for Compliance and Best Practices
To manage these risks and meet regulatory expectations, institutions using AI/ML in AML should take several steps:
- Classify AI systems early as high risk or not, based on whether they influence customer treatment, sanctions screening, transaction monitoring, or customer onboarding. If operating in the EU, check whether the system falls under the EU AI Act's definition of high risk.
- Implement robust model risk governance: Set up oversight bodies and involve legal, compliance, and technical experts. Ensure model validation and testing, including bias detection, fairness assessment, and scenario testing.
- Maintain detailed documentation and audit trails: For each model, record technical documentation (data sources, feature set, architecture, thresholds), decision logic, performance metrics, and errors. Retain logs of inputs, outputs, and decision reasoning. In the EU context, providers must keep documentation for 10 years and logs for at least six months.
- Ensure human oversight and interpretability: Have people who understand both the AML domain and ML review flagged and cleared decisions, and provide explanations in ways that non-data scientists can understand (a minimal sketch of such an explanation follows this list).
- Test, validate, and monitor throughout the model lifecycle: test pre-deployment, monitor drift and false positives/negatives, and adjust thresholds. Conduct post-market monitoring as required by the EU AI Act.
- Train staff and build awareness: cover both compliance staff and those who deploy or maintain models, and ensure they know what explanations are required internally and externally.
- Conduct legal review and alignment: Check contracts with vendors and model providers. Ensure provisions for transparency, rights to audit vendor code or data, and liability clauses. Understand national implementation of FATF standards and local regulatory requirements.
- Prepare for regulatory scrutiny: be ready to explain why a customer was flagged or cleared, what factors led to that outcome, what data was used, what thresholds were applied, and whether any manual overrides occurred.
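As a hedged illustration of the interpretability step above, the sketch below uses a simple logistic regression, so each feature's contribution to a flagged transaction's score can be reported in plain language alongside the threshold that was applied. The feature names, training data, and 0.75 threshold are invented for the example, and this is one option among many (more complex models typically rely on post-hoc explanation tools), not a production transaction-monitoring design.

```python
# A self-contained sketch, not a production system: a simple logistic
# regression whose per-feature contributions to the log-odds can be read off
# directly, so a flagged transaction can be explained in plain language.
# Feature names, data and the 0.75 threshold are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["amount_zscore", "country_risk", "velocity_7d", "new_beneficiary"]

# Toy training data standing in for historical transactions and outcomes.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X @ np.array([1.2, 1.5, 0.8, 0.5]) + rng.normal(scale=0.5, size=500) > 1.0).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

def explain_decision(x, threshold=0.75):
    """Return the score, outcome and ranked per-feature contributions for one case."""
    score = model.predict_proba(x.reshape(1, -1))[0, 1]
    contributions = model.coef_[0] * x          # linear contributions to the log-odds
    ranked = sorted(zip(feature_names, contributions),
                    key=lambda kv: abs(kv[1]), reverse=True)
    outcome = "flagged" if score >= threshold else "cleared"
    return score, outcome, ranked

case = np.array([2.1, 1.8, 0.4, 1.0])           # one hypothetical transaction
score, outcome, drivers = explain_decision(case)

print(f"Outcome: {outcome} (score {score:.2f}, threshold 0.75)")
for name, contribution in drivers:
    print(f"  {name:>16}: {contribution:+.2f} to the log-odds")
```

An analyst or examiner reviewing the alert can see which features pushed the score above the threshold, which is exactly the kind of reasoning institutions are expected to be able to reproduce on demand.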
What May Happen If Institutions Cannot Explain Decisions
If a financial institution cannot provide a clear, transparent explanation of why a given customer or transaction was flagged (or cleared), several adverse outcomes may follow:
- Regulators may impose enforcement actions, including fines or requirements to change or stop use of the AI system.
- Legal claims by customers wrongly harmed: reputational damage, financial compensation, injunctive relief.
- Violations of fundamental rights: in the EU context, lack of transparency may infringe rights under data protection law and principles of fairness and equal treatment.
- Inability to pass audits or mutual evaluations under FATF standards: regulators may find contraventions in supervision reports or in mutual evaluation peer reviews.
- Operational breakdowns: high false positives may overwhelm teams, or false negatives may allow illicit behavior to go undetected, resulting in AML violations.
- Risk of being deemed non-compliant with the EU AI Act (e.g. failure to satisfy obligations under Articles 13, 18, and 26), which can carry heavy administrative penalties.
Lessons From Early Cases and Regulatory Trends
Some institutions have already faced regulatory scrutiny when they lacked explainability in their AI/ML systems. While specific case names are often confidential or settled without public detail, regulatory guidance and examinations make clear that examiners expect documentation, audit trails, human oversight, and performance monitoring.
Regulators are moving from encouraging best practices to enforcing mandatory requirements. The EU AI Act is the most concrete example. FATF continues to make transparency, auditability and interpretability part of its requirements. U.S. regulatory bodies are increasingly treating AI systems in AML like other models subject to model risk management regimes.
Public policy trends show increasing interest in liability frameworks for AI use, aligned with consumer protection, privacy, data protection, and fairness laws.
Closing Thoughts On the Regulatory Imperative
Explainability in AI for AML is no longer optional. Across jurisdictions, institutions must be able to articulate why a customer was flagged or cleared, how their models function, what data informs them, what thresholds and rules were applied, and how human oversight is implemented.
Those who fail to meet this standard expose themselves to regulatory, legal, operational and reputational risk. Those who embed explainability as a core design principle will be better positioned to manage risk, satisfy supervisors, maintain trust, and deploy advanced AI safely.
Related Links
- EU Artificial Intelligence Act (Regulation (EU) 2024/1689) text on transparency obligations and high-risk systems
- FATF โOpportunities and Challenges of New Technologies for AML/CFTโ report
- U.S. Treasury Departmentโs โArtificial Intelligence in Financial Servicesโ report
- OCC Supervisory Expectations for AI and ML in model risk management in banking












