An exclusive article by Fred Kahn
A recent article published by AML Intelligence, a very well-respected Anti-Financial Crime and Regulatory Intelligence news outlet, reported that FinCrime Central is using AI to support its publishing process and highlighted a case where an article included a fabricated AI-generated quote.
Clarifying the Capabilities and Limitations of AI Tools
Before diving into the reasons FinCrime Central relies on AI, it’s worth establishing something that may be obvious to some, but wasn’t to me. AI can not only generate fake quotes, but it can also produce content that plagiarizes exclusive, hand-written material. More importantly, and to my surprise, AI tools do not systematically verify the accuracy of the information they provide. As a result, their output may contain fabricated quotes, inaccurate information, and even plagiarized content.
The Background: Two Major Mistakes
I made two serious, inexcusable mistakes:
- The First Mistake: Plagiarizing a Respected Expert Unintentionally. Sarah Beth Felix is a highly respected AML expert who advises many leading financial institutions through her firm, Palmera Consulting, which she founded 15 years ago. She is also a contributor to AML Intelligence. Sarah Beth published a LinkedIn article about FINRA’s AML actions (FINRA cracks down on AML compliance in investment banking). Another website reused her content verbatim: clear-cut plagiarism.
When writing an article, I typically draft a roughly 300-word summary from source material, then use ChatGPT to expand it with formatting and additional context.
This workflow is fairly standard; even Sarah Beth acknowledged, in a now-deleted LinkedIn comment (not deleted by me), that when she used AI to expand her own articles, the output sometimes resembled FinCrime Central articles.
For the article in question, I asked ChatGPT to expand my draft, using as a source an article published on a website that has nothing to do with Sarah Beth’s post (I provided her the link). The output included her original text, word for word, without quotation marks or attribution. I failed to double-check the output and published it as-is. Although the plagiarism was unintentional and indirect, it was still plagiarism. I take full responsibility for it and could never apologize enough to Sarah Beth.
- The Second Mistake: A Fabricated AI Quote. The second mistake is the one Paul O’Donoghue, Senior Correspondent at AML Intelligence, discusses in the article. I still don’t believe ChatGPT fabricates quotes deliberately, but just as with the Sarah Beth article, it appears to have pulled a fake quote from somewhere and reused it without verification. Again, I failed to verify the quote. I take full accountability for that.
Why FinCrime Central Uses AI Tools
FinCrime Central publishes four articles each morning, completely free of charge. The goal is to relay general financial crime news, particularly from official sources like FinCEN, FATF, the US DOJ, and INTERPOL, not to provide deep expert commentary. It also provides a feature-based AML Solution Provider Directory, helping financial institutions and other firms navigate through the ocean of AML software solutions.
Some articles come directly from external contributors, such as Pietro Odorisio or John Christmas, usually on an exclusive basis, with full bios and contributor pages.
Money laundering is an incredibly broad topic. Most professionals develop deep expertise in only a few specific areas. At FinCrime Central, we use AI to:
- Enrich or expand articles by adding relevant context and examples
- Offer a broader or alternative perspective
- Standardize formatting and optimize SEO
- Run a grammar and vocabulary health check (English is not my mother tongue)
When asked why AI tools help write about financial crime, ChatGPT gave this answer as its second reason:
“AI tools minimize human error by verifying facts, cross-checking legal and regulatory details, and ensuring the content consistently meets industry compliance standards, crucial in sensitive areas like AML/CFT.”
Clearly, we are not there yet.
Corrective Measures Taken
As soon as I realized that two of my articles contained problematic content, I took immediate action. Within two hours:
- I removed the specific problematic passages.
- I then deleted the articles entirely, including the LinkedIn posts linking to them.
I also implemented a series of new measures:
- Quote verification: I now systematically verify every quote or factual reference provided by AI.
- Attribution clarity: I always cite the source and author, and when possible, include a link to the author’s bio. If the author isn’t named, I proceed with extra caution.
- Stricter sourcing: Every morning, I review over 40 sources. I’ve now removed those that aren’t official or seem suspicious. For instance, AML Intelligence was never among my sources—not because it lacks quality, but because many of its articles are Premium content, far from generic information, or are based on exclusive first-hand information I can’t verify independently.
- AI-generated image disclaimers: Every AI-generated image now includes a caption stating it was AI-created.
- Footnote disclosures: Each article includes a footnote stating that content may be enhanced or enriched by AI and could contain unintentional errors.
- Improved prompts: I’ve reworked my AI prompts to require external references for quotes or when discussing potential regulatory changes.
- Default skepticism: I now default to cautious skepticism with AI tools rather than trust.
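To illustrate what the quote-verification measure can look like in practice, here is a minimal, hypothetical Python sketch that flags long verbatim overlaps between an AI-expanded draft and its source article. The function name, the eight-word threshold, and the sample texts are illustrative assumptions of mine, not a tool FinCrime Central actually runs:

```python
def verbatim_overlaps(draft, source, min_words=8):
    """Return runs of at least `min_words` consecutive words that appear
    verbatim in both the AI draft and the source article."""
    def ngrams(text, n):
        words = text.lower().split()
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

    # Any shared n-gram of this length is suspicious enough to review by hand.
    return sorted(ngrams(draft, min_words) & ngrams(source, min_words))


# Illustrative texts only (not real quotes from any article).
source = ("FINRA has sharply increased its enforcement actions against "
          "investment banks with weak AML programs this year")
draft = ("According to recent reporting, FINRA has sharply increased its "
         "enforcement actions against investment banks with weak AML "
         "programs this year, a trend experts expect to continue")

for run in verbatim_overlaps(draft, source):
    print("REVIEW NEEDED:", run)
```

The eight-word threshold is arbitrary: shorter runs trigger false positives on stock phrases, while longer runs can miss lightly edited copying, so any flagged passage still needs a human read.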
Some Questions About AML Intelligence’s Article
Some aspects of AML Intelligence’s reporting remain puzzling:
- Private Messages Quoted Without Consent: Paul O’Donoghue and I had a private conversation on LinkedIn. I would have expected to be asked for permission, or at least informed, before my messages were quoted verbatim in an article.
- Omitted Response Details: Paul asked for a list of measures I was taking to avoid such mistakes in the future. I gave him detailed responses (see above), which never made it to the article. I thought the exchange signaled fair, balanced reporting. It seems I was wrong.
- Publishing My Hometown: A Privacy Concern: At the end of the article, my hometown was mentioned—given its small size, this could make it easy to identify my personal address. This qualifies as doxxing, which is illegal in much of the Western world. The location was changed after I contacted the publisher.
- GDPR Implications: Publishing my identifiable personal information raised serious questions about how well the journalist understood GDPR. If GDPR compliance is well understood, why was that data published? What safeguards are in place? Is there regular GDPR training for employees?
- Editorial Positioning: Finally, I am left wondering: how does this article fit AML Intelligence’s strong editorial line? What makes it deserving of being featured for eight days in a row?
Final Thoughts: The Inevitable Rise of AI
AI has been around for years, often operating invisibly. Over the past three years, Generative AI has become a daily tool for many professionals. And over the last two years, industries, including the AML/CFT space, have seen AI become increasingly integral.
Four years ago, the inclusion of an AI feature in a client lifecycle management system or transaction monitoring solution raised eyebrows. Today, the opposite is true: people hesitate to choose providers that haven’t embraced AI in some way.
I love craftsmanship. I admire the idea of someone perfecting their work through repetition over the years. That dedication is rare and valuable. In Japan, for example, some of the most recognized chefs have spent years mastering a single technique, like making a perfect omelette with the very same ingredients and the very same tools, decade after decade. But I also recognize that this approach does not fit the ever-changing world we live in.
Whether we like it or not, AI is here to stay. It’s transforming our world. The pace of innovation is forcing us to adapt by the day. Every day, we learn that GenAI and AI agents can make our lives easier and free up time to do more value-added tasks. Ignoring its potential would be a serious mistake.
But failing to validate every aspect of the output, failing to cross-check, and being too trusting is also a mistake. Not learning from it would be an oversight and a failing.
Fred Kahn