AI-powered deception accelerates as cybercriminals shift from money theft to identity takeover
ASIA Pacific is poised to become the testbed for global fraud innovation, with generative artificial intelligence (AI) emerging as the default method of attack, and Malaysia is not off the hook.
The fraud rate in Malaysia experienced the highest year-on-year (YoY) increase, with cybercriminals targeting online transactions using money mules, as well as investment fraud aimed at expatriates, retirees and young professionals, according to Sum and Substance (UK) Ltd’s (Sumsub) Fraud Exposure Survey 2025.
The report noted that users can expect deepfakes to expand into multi-modal fraud, combining video, voice and tampered telemetry to overwhelm liveness and behavioural checks.
The Maldives and Malaysia ranked top in Asia Pacific for the largest YoY growth in deepfakes, with the Maldives seeing a hefty 2,100% surge while Malaysia saw a 408% jump. They were followed by Mongolia, Thailand, Sri Lanka and Singapore.
“We are witnessing a fundamental shift in the nature of fraud. Generative AI has democratised deception, but it has also forced verification to innovate at a pace faster than ever before. What we’re witnessing now is not a rise in the levels of fraud, but instead, smarter and more deliberate attacks, with multiple layers of deceit,” said Sumsub co-founder and CEO Andrew Sever.

Sumsub provides a verification platform to help businesses manage user onboarding, comply with regulations and prevent fraud. It provides a single solution for know your customer (KYC), know your business (KYB), as well as anti-money laundering and fraud detection.
Some 32% of respondents in Asia Pacific have come across deepfakes online, while another 24% are unsure, showing that synthetic media is now so convincing that many users cannot tell real from fake.
Indonesia stands out as a stable but persistently high-risk environment, the report said. “Its sprawling super-app ecosystems create a wide attack surface, where one compromised identity can unlock multiple services. Fraudsters continue to prize this environment even when crude fraud attempts are curbed, because the potential payoff of each breach remains exceptionally high,” it said.
In the Philippines, it said pressure persists, largely driven by remittance platforms, gaming accounts and social media-linked onboarding, all of which are frequent targets of fraud.
The report noted that regulatory response is catching up. It noted that India is pushing biometric requirements for fintechs, Singapore is preparing AI-governance rules that will affect fraud detection while Japan is expanding digital identity pilots but faces new risks as fraudsters target previously low-risk markets.
Clear Patterns
The data reveals a clear pattern: Most attacks still start with people, not systems. With phishing (61%) and weak passwords (30%) leading the list, human error and low digital hygiene remain the most exploitable weaknesses.
The dominant fraud outcome — social media (69%) and government portal (15%) account takeovers — underscores a shift in attacker priorities: Rather than targeting money directly, fraudsters increasingly seek to control identities and gain digital access.
At the same time, financial loss remains significant, with 34% reporting that funds were stolen and 24% tricked into sending money.
It noted that over 60% of Asia Pacific users already leverage virtual or disposable cards at least occasionally. This indicates a growing awareness of payment fraud risks and a willingness to adopt preventive measures.
However, it said the 40% who rarely or never use such cards point to a persistent trust and accessibility gap — especially among users who rely on traditional banking channels or lack exposure to fintech-driven solutions.
The report also highlighted that money muling remained a hidden threat.

While nearly 80% of respondents recognise the term “money muling”, the majority lack a clear understanding of its legal and financial consequences — revealing a dangerous gap between awareness and comprehension, it said.
Even more alarming, it noted that one in four individuals has been personally targeted for mule activity, suggesting that criminal recruitment is active and widespread in the region.
This highlights how fraud networks are scaling social engineering tactics to exploit financially vulnerable or unaware users.
The next frontier of fraud prevention will belong to those who can unite human insight, data intelligence and AI precision to build trust at scale, the report said.
Sophisticated Fraud
The report noted that in previous years, AI was primarily used by fraudsters as a tool to forge IDs, edit documents or spoof liveness checks.
In 2025, it has evolved into something larger: A sophisticated fraud production ecosystem.
It outlined five scenarios.
Platforms like OpenAI’s advanced image generation tools now create IDs with near-perfect detail — replicating fonts, holograms and textures that once required specialist skills.
Big Tech companies have attempted to implement protection measures to combat misinformation and plagiarism, including adding watermarks to AI-generated text.

However, these watermarks are easily removed, allowing bad actors to pass off their AI-generated images as genuine, or copied and applied to other AI-generated images to lend them false credibility.

Next-generation text-to-video systems, such as Google Veo and OpenAI’s Sora and Sora 2, can render entire dynamic scenes from short prompts, complete with realistic facial micro-expressions, lighting and depth.
These tools enable attackers to stage convincing deepfake liveness checks that mimic the movements and reactions of real people, making visual verification one of the most vulnerable layers of identity defence.
As AI adoption swells among businesses and consumers alike, Big Tech is vying to be the most innovative in this space.
After Google’s release of Veo 3, OpenAI introduced Sora 2, quickly followed by Google’s release of Veo 3.1 — signalling an escalating contest to build the most realistic AI tools, thus accelerating the Sophistication Shift.
Fraud-as-a-service providers now bundle these models into ready-made production kits, enabling even low-skilled actors to generate industrial quantities of high-quality forgeries.
This marks the leap from AI as a helper to AI as the engine behind industrialised, scalable fraud.
It accelerates both quantity (millions of attempts still flood the system) and quality (more sophisticated, harder-to-detect attacks) — fuelling the Sophistication Shift.
Source: themalaysianreserve.com






