
Cybercrime experts at Money20/20 Asia urge financial institutions to abandon static security rules and adopt continuous, AI-driven defences as deepfake fraud reaches industrial scale.
The global fraud landscape has crossed a threshold that would have seemed implausible just a few years ago: scam operations have grown into a multi-billion-dollar industry that now surpasses the entire global drug trade in revenue, according to cybersecurity and digital identity experts speaking at Money20/20 Asia 2026 in Bangkok on Wednesday.
The stark warning came during a panel session titled "How Cybercriminals Are Targeting Fintech and What's Next", which closed the second day of the conference on 22 April.
The discussion brought together Carolyn Fox, director of Trust and Safety at TELUS Digital, and Niki Luhur, chief executive officer of VIDA Digital Identity, in a conversation moderated by Joseph McGuire, head of Digital Labs at Mastercard.
From a guy in a hoodie to industrial parks
The picture painted by the panellists was a far cry from the popular image of a lone hacker.
Luhur described a dramatic escalation in the sophistication and scale of cybercriminal operations, noting that deepfake attacks — barely distinguishable from genuine content just two years ago — had by 2025 become the dominant attack method, with virtually all identity fraud attempts using AI-generated imagery.
More alarming still, these deepfakes now embed what Luhur described as "adversarial noise" — a data science technique specifically designed to defeat automated fraud detection systems.
"Not only are they using an AI model, they've got a data science team behind them, intentionally knowing what detection techniques are happening and developing techniques to evade that computer vision detection model," he said. "These are full-blown industrial parks."
He added that the human cost of these operations had become deeply troubling, pointing to reports of people being trafficked across Southeast Asian borders into forced labour within scam compounds.
"People here in Bangkok are literally getting picked up for a fake job, driven across the border, and held as slave labour. That's insane — and I would have never imagined that this is a reality."
Fox, meanwhile, highlighted how AI had fundamentally lowered the barrier to entry for social engineering fraud.
"You have these massive factories of people preying on people around the world — but now it's supercharged with AI. So you don't need a massive factory. It can be one guy in a basement somewhere turning this out."
Cybercrime has no favourite target
A central theme of the discussion was the indiscriminate nature of modern cyberattacks. Luhur was emphatic that no institution — regardless of size — is immune.
"Cybercrime and cybersecurity is, in a sense, democratic, because they don't care who you are or what size of institution you are. They just exploit the vulnerability. If you left door A open, they're going to hit everyone who left door A open," he said.
He warned that the emergence of more powerful AI models would make vulnerability scanning near-instantaneous and continuous.
"Whatever exposure and vulnerability you have, it's going to come out — and it's not a matter of a year. You know what we're talking about now."
Fox noted that fraud does not begin on financial platforms.
"The cybercriminals are platform and institution agnostic, and it starts on non-financial institution platforms — dating apps, social media," she said.
The fix: connect the dots, ditch the static rules
Both panellists were unequivocal that the industry's current defences are inadequate — and that the solution lies not in more technology in isolation but in better-connected systems and human oversight.
Luhur pointed to a structural flaw that criminals are actively exploiting: siloed security infrastructure within financial institutions.
KYC teams, onboarding systems, authentication platforms and transaction monitoring tools frequently operate independently, with no shared data or unified command.
"If you can just connect the time when a customer comes in and the time when money flows out of that person's account and connect their face, their device, and their biometrics—you're going to be in a lot better shape," he said. "You're going to solve most of your problems by doing something that's honestly relatively simple. Not easy, but simple."
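Luhur's prescription amounts to joining signals most institutions already collect but keep in separate systems. A minimal illustrative sketch (all class and field names are assumptions for this example, not any vendor's schema): compare the face hash and device fingerprint captured at onboarding against those presented when money flows out.

```python
from dataclasses import dataclass

@dataclass
class OnboardingRecord:
    customer_id: str
    face_hash: str    # hash of the selfie captured at KYC
    device_id: str    # device fingerprint recorded at sign-up

@dataclass
class Transaction:
    customer_id: str
    amount: float
    face_hash: str    # hash of the face presented when authorising the payment
    device_id: str    # device fingerprint at transaction time

def correlate(onboarding: dict[str, OnboardingRecord], txn: Transaction) -> list[str]:
    """Return the identity signals that no longer match the onboarding record."""
    record = onboarding.get(txn.customer_id)
    if record is None:
        return ["unknown_customer"]
    mismatches = []
    if txn.face_hash != record.face_hash:
        mismatches.append("face_mismatch")
    if txn.device_id != record.device_id:
        mismatches.append("device_mismatch")
    return mismatches
```

In a real deployment the "hashes" would be biometric templates compared with a similarity threshold rather than exact equality, but the structural point is the same: the check only works if onboarding and transaction systems share data.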
He was equally pointed in his criticism of legacy fraud detection tools.
"Most financial institutions are still on systems with engineered static rules — that's the reality — and you need to upgrade."
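The contrast Luhur draws can be made concrete. A hedged sketch (thresholds are illustrative, not from the panel): an engineered static rule applies one flat limit to every customer, while even a simple behavioural check scores a transaction against that customer's own history.

```python
import statistics

STATIC_LIMIT = 10_000  # classic engineered rule: one flat threshold for everyone

def static_rule(amount: float) -> bool:
    """Flag any transaction over the fixed limit, regardless of who sends it."""
    return amount > STATIC_LIMIT

def behavioural_rule(history: list[float], amount: float, k: float = 3.0) -> bool:
    """Flag amounts more than k standard deviations above this
    customer's own baseline; fall back to the static rule when
    there is too little history to model behaviour."""
    if len(history) < 5:
        return static_rule(amount)
    mean = statistics.mean(history)
    stdev = statistics.stdev(history) or 1.0
    return amount > mean + k * stdev
```

A customer who normally sends around 100 and suddenly sends 5,000 sails under the static limit but is caught by the behavioural baseline, which is the gap static rules leave open.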
Fox reinforced the argument for human intervention, warning that AI, despite its strengths in pattern recognition, cannot grasp intent or adapt to context without human input.
She recalled an incident in which a client's AI system incorrectly flagged thousands of legitimate account applications in Latin America as fraudulent because utility bills in the region commonly carry advertisements that the system mistook for suspicious activity.
The problem was only caught because customer support staff — operating in a separate silo — began receiving complaints.
"Having that human in the loop and making sure that your humans are talking to each other is very important," she said.
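Fox's human-in-the-loop point suggests a simple routing pattern: let the model auto-decide only at high confidence, and send the ambiguous middle band to reviewers instead of auto-declining it. A minimal sketch (the threshold values are assumptions for illustration):

```python
def route(score: float, auto_threshold: float = 0.9, review_threshold: float = 0.5) -> str:
    """Route a model's fraud score: auto-decline only when the model is
    very confident, escalate the uncertain middle band to a human queue."""
    if score >= auto_threshold:
        return "decline"
    if score >= review_threshold:
        return "human_review"
    return "approve"
```

Under this pattern, the Latin American utility-bill applications Fox described would have landed in the review queue, where a human could recognise the advertisements for what they were.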
Regulators must get specific
The panellists also took aim at regulatory frameworks that they argued have been too vague to be effective. Luhur cited Indonesia's experience, where financial industry bodies had lobbied for light-touch, principles-based oversight – a position he now regards as a mistake.
"You can't just say it has to be safe. You have to be pretty detailed about what 'safe' means — what low risk, medium risk, and high risk mean and what types of tools and standards you need to apply," he said. "When you need the whole infrastructure to change, the regulator has got to have some teeth."
He pointed to two more prescriptive regulatory models as examples worth emulating: the Philippines' Anti-Financial Account Scamming Act (AFASA), which mandates transaction monitoring and behavioural analytics across all financial institutions and fintechs; and the Monetary Authority of Singapore's move to require continuous penetration testing and vulnerability assessments.
"Certain regulators are already being prescriptive because it's already a massive problem, and the industry needs to move," Luhur said.
On the question of AI-powered defensive tools, Luhur argued that institutions need not wait for bespoke solutions.
Running existing large language models (LLMs) against publicly available cybersecurity frameworks, he said, could already expose a significant number of vulnerabilities in any organisation's systems — and at a fraction of the cost and time of traditional manual penetration testing.
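One way to picture the approach Luhur describes, without committing to any particular model provider: iterate a public checklist (the items below are paraphrased in the style of OWASP-type guidance, not quoted from any framework) over a description of your system and collect the items the model flags. `ask_llm` here is a stand-in stub so the sketch runs on its own; a real implementation would replace it with an actual model API call.

```python
# Illustrative checklist items; a real scan would use a published framework.
CHECKLIST = [
    "Are all authentication endpoints rate-limited?",
    "Is session expiry enforced server-side?",
    "Are uploaded files validated by content type, not extension?",
]

def ask_llm(question: str, system_description: str) -> str:
    """Placeholder for a real model call. Returns 'FINDING' or 'OK'.
    This canned stub lets the sketch run standalone."""
    return "FINDING" if "rate-limited" in question else "OK"

def scan(system_description: str) -> list[str]:
    """Ask the model each checklist question and collect flagged items."""
    findings = []
    for item in CHECKLIST:
        if ask_llm(item, system_description).startswith("FINDING"):
            findings.append(item)
    return findings
```

The value is not that the model is a penetration tester, but that it can triage a long checklist cheaply and continuously, leaving humans to verify the findings.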
"It's scary to find out all of the holes you have in your system — but wouldn't you want to know the holes before something happens, as opposed to being completely blindsided?"