Fraud‑as‑a‑Service: Organized Cybercrime’s Underground Economy

Published: 11.11.2025

Updated: 11.11.2025

Author: Erik Vasaasen

As our financial lives move increasingly online, fraudsters are evolving their business models to keep pace. Enter Fraud‑as‑a‑Service (FaaS): an underground economy where organized cybercriminal groups sell ready-made fraud tools and kits to anyone willing to pay. Much like software-as-a-service in the legitimate tech world, FaaS provides on-demand access to sophisticated scam techniques, complete with customer support and regular updates. This post examines the rise of FaaS, how AI-driven impersonation and phishing kits are lowering the barrier to entry for cybercrime, and what banks, payment service providers (PSPs), and merchants can do to defend against this new wave of fraud.

The Evolution of Fraud‑as‑a‑Service

Fraud has always been about innovation. Criminals are quick to exploit new technologies and platforms, but in the past, an aspiring fraudster needed significant technical skill to create malware or phishing schemes from scratch. Today, however, FaaS has turned fraud into a turnkey business model. On dark web forums and Telegram chat groups, would-be criminals can buy plug-and-play scam kits and services, often with tiered subscriptions, user manuals, and even 24/7 support. In other words, cybercrime has become “professionalized”, structured like a startup ecosystem where specialized providers supply tools for phishing, malware, fake identities, money laundering, and more. Finding these marketplaces is quite straightforward: Google.com might not list them, but how about yandex.com? And while a search for “fraud pack” might come up empty, try its Russian equivalent, “фрод-пак”, instead.

This organized approach means even a novice can launch large-scale fraud operations. FaaS offerings include everything from phishing website templates and business email compromise (BEC) kits to stolen credit card data, money mule networks, and remote access trojans (RATs). There are services to bypass two-factor authentication, to crack passwords, and to launder proceeds via cryptocurrency. Some FaaS vendors advertise their wares across multiple channels and websites, constantly evading law enforcement while reaching an ever-wider audience. Some quick searches even found fraud toolkits advertised on Instagram.

AI: A Game Changer for Impersonation and Phishing

One of the biggest evolutions in FaaS is the integration of artificial intelligence. Off-the-shelf AI models and custom “dark” AI tools are being weaponized by criminals to automate and amplify their scams. In our post The Fraud of Today and Tomorrow from 2022, we predicted that machine learning and AI advancements would enable fraudsters to impersonate people and launch more convincing social engineering attacks. That future is now here.

Today’s FaaS kits often include AI-driven impersonation capabilities, such as deepfake voices and real-time text generation, allowing scams to be executed at a scale and believability previously unimaginable. For example, one proof-of-concept tool called ViKing demonstrated how an entire phone-based “vishing” scam could be fully automated by AI, combining speech recognition and voice synthesis to carry out live scam calls without human intervention. In controlled trials, this AI caller cloned voices from short audio samples and held fluid conversations, convincing over half of unwitting participants to hand over sensitive information. It’s not hard to imagine such a tool becoming available as a ready-made service on criminal forums in the near future.

Meanwhile, AI text generators are enabling phishing and BEC attacks at scale. Tools such as ChatGPT have sophisticated protections against being used for fraud, but it is easy enough to find “decensored” versions of popular models you can run locally. These models will quite happily help you with basically anything that can be extrapolated from their training data, without any limitations. In addition, there is a constant flow of new tools for scraping LinkedIn and news sources to inject personalized details, making scam emails more authentic and targeted (e.g. referencing a real vendor or project by name). These AI tools allow a single fraudster to generate hundreds of customized scam messages in minutes.

Visual and audio deepfakes add another layer. Deepfake video kits are emerging that let attackers impersonate executives or colleagues on video calls in real time. In one 2024 case in Hong Kong, criminals used deepfake video of a company’s CFO and several others to trick an employee into a $25 million wire transfer. All of this is offered as part of FaaS kits or services. In essence, AI has become a force multiplier: fraud attacks can be launched in bulk, across channels, with a level of personalization that would be impossible to achieve manually. A single well-crafted AI tool can be leveraged by dozens of fraudsters simultaneously.

New Tech, Old Tricks: Parallels with Traditional Fraud

For all the high-tech tools involved, it’s striking how FaaS-enabled scams still mirror classic fraud schemes at their core. The technology has changed, but the con artistry fundamentals remain the same, just executed on a larger stage.

Take the fake invoice scam, a long-running con targeting businesses. In the analog days, fraudsters would mail or fax a phony invoice hoping a busy accountant would pay it without scrutiny. We discussed this in our Fraud Around the World post, noting how criminals even impersonate CEOs to authorize bogus payments. In the FaaS era, that concept lives on through tools like the “Business Invoice Swapper.” This malware quietly sits in a compromised email account, detects genuine invoice emails, and swaps out the bank account details to reroute payments to the fraudster’s mule account. The entire scam is automated: legitimate vendors send real invoices, which are intercepted and altered in transit, fooling companies into paying the wrong recipient. It’s the old fake invoice trick, now supercharged by AI and malware. The damage can be huge because the fraud often isn’t noticed until vendors follow up on missing payments.

Similarly, confidence scams (a.k.a. social engineering or romance scams) have been around forever: con artists exploit trust by posing as someone they’re not, whether it’s a long-lost relative, a love interest, or an “investment guru.” In Fraud Around the World, we highlighted the rise of fake dating profiles pushing phony crypto investments, a textbook confidence scam adapted to modern times. FaaS has taken this to another level: scammers can deploy AI chatbots to romance dozens of victims concurrently, maintaining realistic back-and-forth conversations without tiring. Deepfake profile pictures and even voice notes add authenticity. The emotional manipulation at the heart of the scam hasn’t changed: victims are still persuaded to send money or crypto to someone they trust, but now a single fraudster can con many more people in parallel with far less effort, thanks to AI-driven automation.

The Grey-Market Services Enabling Fraud

An important aspect of the FaaS discussion is the grey area of services that are legal or semi-legal, but facilitate fraud. Not everything a fraudster uses is explicitly illegal on its face. Some tools exist in a murky middle ground, exploited by criminals under the radar. Two notable examples are cryptocurrency tumblers and bulletproof hosting.

Crypto tumblers (mixers) are services that take in cryptocurrency from users and “mix” it by shuffling it among many accounts, ultimately paying out to a new address. The goal is to obscure the origin of the funds, basically laundering crypto by breaking the traceable chain on the blockchain. Tumblers can be used for privacy reasons by law-abiding individuals, which is why they aren’t outright banned in most jurisdictions. However, they are a favorite tool for cybercriminals; studies have shown that a significant share of illicit crypto funds, especially from hacks or ransomware, flow through mixers. This puts tumblers in a legal grey zone. Regulators have started to crack down as well. The U.S. Treasury even sanctioned one popular mixer in 2022 for facilitating North Korean hackers.

Bulletproof hosting is another grey-market service that underpins a lot of cybercrime infrastructure. These are web hosting providers (often located in regions with lax enforcement) that explicitly or tacitly allow illegal or unethical content on their servers. A normal hosting company will shut down your site if it’s discovered to be a phishing page or command-and-control server for malware. Bulletproof hosts, by contrast, promise to ignore complaints. They won’t take down your phishing site even if banks and victims report it, and they’ll resist law enforcement requests for as long as possible. Some operate from former military bunkers or offshore locations, adding to their mystique. While providing hosting is legal, knowingly catering to criminal clientele is what places these services in a grey area. They often justify themselves under notions of “privacy” or “anti-censorship,” but in reality they enable cybercriminals to remain online and effective. Many FaaS schemes rely on bulletproof hosts to keep their malicious infrastructure (phishing sites, malware download servers, etc.) running without interruption.

Other examples abound: encrypted messaging apps that refuse lawful interception, “anonymous” VPNs and proxies, even freelance marketplaces where hackers offer services under the guise of penetration testing. Individually, these tools might have legitimate uses, but together they form a shadow infrastructure that empowers FaaS. Tackling fraud therefore isn’t just about banning the obvious criminal acts; it also means addressing the grey-market facilitators. For instance, international efforts are underway to regulate crypto mixers under anti-money laundering laws, and authorities have seized servers of bulletproof hosts when possible.

Fighting Back: Defensive Strategies for Banks, PSPs, and Merchants

Facing this fast-evolving FaaS landscape can feel daunting for financial institutions and businesses. However, there are concrete steps that banks, payment providers, and merchants can take to level the playing field:

1. Collaborative Defense and Intelligence Sharing: No bank or company can fight modern fraud alone. Just as criminals collaborate, so must the defenders. This means sharing fraud intelligence across institutions and industries, and even with law enforcement. Banks and PSPs are increasingly forming consortiums to pool data on confirmed fraud patterns, suspicious accounts, and emerging scam types. For example, if one bank identifies a mule account or a new phishing kit targeting its customers, that intel should be rapidly shared so others can block related activity (see the indicator-sharing sketch after this list). Increased information sharing is a topic in both the new Anti-Money-Laundering directive (AMLD 7) and DORA.

2. AI-Driven Detection and Analytics: To counter AI-enabled fraud, defenders should likewise embrace AI and machine learning for fraud detection. Encouragingly, about three-quarters of financial institutions report using AI in their fraud prevention efforts already. These technologies can analyze vast streams of transaction data, login behavior, device fingerprinting, and more to spot anomalies in real time. For instance, AI models excel at detecting when a user’s behavior suddenly deviates from their norm (potentially indicating an account takeover) or when hundreds of fake accounts exhibit similar patterns. Modern fraud AI can also recognize the telltale signs of automated attacks, such as perfectly typed login credentials (no typos) entered at superhuman speed, or subtle device identifiers that suggest the use of emulators and bots (a simple bot-detection heuristic is sketched after this list).

3. Strong Customer Authentication (SCA) and Zero Trust: One of the most effective bulwarks against many FaaS-enabled attacks is strong authentication for customers and employees. Europe’s rollout of PSD2 Strong Customer Authentication requirements has significantly reduced certain fraud rates by requiring dynamic two-factor or multifactor checks for transactions. Banks and merchants globally should continue to push for phishing-resistant authentication methods; this is where we at Okay can help. Use biometrics, hardware security keys, or app-based cryptographic signatures (sketched after this list) rather than outdated SMS OTPs or static passwords. In our What Does Future‑Proof Security Mean in the Age of AI and AGI? post, we emphasized the importance of a Zero-Trust mindset: assume no channel is secure and no request is genuine until verified. Practically, this means layering controls like device reputation checks, geolocation matching, and transaction risk scoring on every sensitive action. It also means designing internal processes (like verifying a sudden payment instruction from the CEO) to require out-of-band confirmation. With deepfakes able to imitate voices and faces, organizations might implement verification codes or safe words for high-level authorizations. Strong authentication is not foolproof, especially against social engineering, but it can stop many automated or mass-scale attacks cold and force the fraudsters into riskier, manual efforts.

4. Embrace Adaptive Security and Resilience: Finally, businesses should aim to be resilient in the face of fraud incidents. Despite best efforts, some attacks will get through. What matters is limiting the damage and learning quickly. This includes measures like setting lower thresholds for suspicious transaction alerts (to catch small test transactions by fraudsters; one such rule is sketched below), having rapid response playbooks for containing account breaches, and offering customers protections like guarantees or insurance against certain fraud losses. From a strategic standpoint, investing in solutions and partnerships that prioritize agility is wise. As we noted in our Future-Proof Security discussion, security teams should not rely on static one-size-fits-all defenses; instead, they should cultivate a proactive mindset that anticipates new forms of attack.
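
To make point 1 concrete, here is a minimal sketch of privacy-preserving indicator sharing. It is our illustration rather than a scheme mandated by AMLD 7 or DORA: the consortium key, the indicator format, and the blocklist structure are all assumptions, and real consortiums use far more elaborate arrangements.

```python
# Minimal sketch (illustrative, not from any directive) of privacy-preserving
# intel sharing: banks exchange keyed hashes of confirmed mule-account IBANs,
# so members can match against their own data without exposing raw numbers.

import hashlib
import hmac

CONSORTIUM_KEY = b"rotated-shared-secret"  # hypothetical shared key

def indicator(iban: str) -> str:
    """Keyed hash of an account identifier, safe to share with peers."""
    normalized = iban.replace(" ", "").upper().encode()
    return hmac.new(CONSORTIUM_KEY, normalized, hashlib.sha256).hexdigest()

# Bank A publishes indicators for accounts it has confirmed as mule accounts
shared_blocklist = {indicator("NO93 8601 1117 947")}

# Bank B checks an outgoing payment's payee against the shared list
def payee_is_flagged(payee_iban: str) -> bool:
    return indicator(payee_iban) in shared_blocklist

print(payee_is_flagged("NO9386011117947"))        # True: known mule account
print(payee_is_flagged("DE89370400440532013000")) # False: not on the list
```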
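
For point 2, the sketch below shows the kind of heuristic an AI-assisted detection pipeline might layer in to catch automated logins. The event format (per-keystroke gaps, a backspace count, an emulator flag) is hypothetical, and the thresholds are illustrative; production systems learn such boundaries from data rather than hard-coding them.

```python
# Minimal sketch: flag logins whose typing cadence looks automated.
# Assumes the login page captures per-keystroke timings (hypothetical
# event format); real systems combine many more signals.

from dataclasses import dataclass
from statistics import pstdev

@dataclass
class LoginAttempt:
    keystroke_gaps_ms: list[float]  # time between consecutive keystrokes
    backspace_count: int            # corrections made while typing
    device_is_emulator: bool        # e.g. from a device-fingerprinting SDK

def looks_automated(attempt: LoginAttempt) -> bool:
    """Heuristic bot check: humans type unevenly and make typos."""
    gaps = attempt.keystroke_gaps_ms
    if not gaps:
        return True  # credentials pasted or injected in a single event
    mean_gap = sum(gaps) / len(gaps)
    if mean_gap < 30:       # superhuman sustained typing speed
        return True
    if pstdev(gaps) < 5:    # unnaturally even rhythm
        return True
    if attempt.backspace_count == 0 and attempt.device_is_emulator:
        return True         # perfect input on an emulated device
    return False

# Example: a scripted login typed at a constant 10 ms per key
bot = LoginAttempt(keystroke_gaps_ms=[10.0] * 20, backspace_count=0,
                   device_is_emulator=True)
print(looks_automated(bot))  # True
```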
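
For point 3, here is a minimal sketch of app-based cryptographic signing (“what you see is what you sign”), one of the phishing-resistant methods mentioned above. It uses the Python cryptography package for illustration; in a real deployment the private key never leaves the phone’s secure hardware, and this is not a description of Okay’s product internals.

```python
# Minimal sketch of app-based transaction signing. Both sides are shown in
# one script for illustration; in practice the private key stays on the
# user's device and the server only holds the public key.
# Requires the 'cryptography' package.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# Enrollment: key pair generated on the user's device
device_key = ec.generate_private_key(ec.SECP256R1())
server_copy_of_public_key = device_key.public_key()

# The exact payment details shown to the user are what gets signed,
# so a tampered amount or payee invalidates the signature.
payment = b"pay EUR 125.00 to IBAN NO9386011117947, ref 2025-0042"
signature = device_key.sign(payment, ec.ECDSA(hashes.SHA256()))

# Server side: verify that the submitted details match what was approved
def payment_approved(details: bytes, sig: bytes) -> bool:
    try:
        server_copy_of_public_key.verify(sig, details,
                                         ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

print(payment_approved(payment, signature))                         # True
print(payment_approved(b"pay EUR 9999.00 to attacker", signature))  # False
```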
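
And for point 4, a toy version of one such lower-threshold rule: flagging the classic pattern where a tiny “test” charge on a stolen card is followed shortly by a large cash-out attempt. The amounts and time window are made-up placeholders that a real fraud team would tune against its own data.

```python
# Minimal sketch of one resilience rule: flag the "small test transaction,
# then a large one" pattern fraudsters use to probe stolen cards.
# Thresholds are illustrative, not tuned.

from datetime import datetime, timedelta

TEST_TXN_MAX = 2.00           # "test" charges are typically tiny
LARGE_TXN_MIN = 500.00        # follow-up cash-out attempt
WINDOW = timedelta(hours=24)  # how long after the probe to stay alert

def card_testing_alert(history: list[tuple[datetime, float]]) -> bool:
    """history: (timestamp, amount) pairs for one card, oldest first."""
    for i, (t_probe, probe_amount) in enumerate(history):
        if probe_amount > TEST_TXN_MAX:
            continue  # not a candidate probe transaction
        for t_later, later_amount in history[i + 1:]:
            if t_later - t_probe > WINDOW:
                break  # outside the alert window for this probe
            if later_amount >= LARGE_TXN_MIN:
                return True
    return False

now = datetime(2025, 11, 11, 12, 0)
txns = [(now, 1.00), (now + timedelta(minutes=45), 950.00)]
print(card_testing_alert(txns))  # True: probe followed by cash-out
```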

Conclusion

The emergence of Fraud-as-a-Service marks a shift in the cybercrime world. Organized fraud rings have effectively productized and scaled fraud to a larger degree than before, offering it to anyone with an internet connection and ill intent. The integration of AI into these services has only accelerated the threat, enabling mass impersonation, automated social engineering, and lightning-fast exploitation of security gaps. We truly are witnessing fraud of today and tomorrow converge: age-old scams delivered through cutting-edge technology.

Yet, it’s important to remember that defenders are not powerless. By understanding how FaaS operates and anticipating the fraudsters’ moves, banks, PSPs, and merchants can adapt their defenses accordingly. The fight against fraud in the age of AI will require the same level of innovation, collaboration, and agility that the attackers are demonstrating. It’s a continuous battle, but with strong authentication, intelligent detection, industry cooperation, and user awareness, we can raise the barriers high enough to blunt the impact of these as-a-service fraud tools.

In the end, fraud is a societal problem as much as a security problem. As always, Okay is here to help organizations navigate these challenges, with solutions designed to counter both the fraud of today and whatever tomorrow brings. Stay safe, stay informed, and let’s outsmart the fraud economy together.