The Rise of AI-Enabled Crime: Exploring the evolution, risks, and responses to AI-powered criminal enterprises
Artificial intelligence (AI) has revolutionized numerous industries, enhancing productivity and innovation at an unprecedented speed. From increasing access to healthcare, to climate modeling that helps mitigate the impact of weather events, to improving efficiency and security in our workplaces — AI is enabling better, more sustainable outcomes.
However, this same transformative technology is also being leveraged for criminal purposes, posing significant threats to global security and societal stability. Malign actors are increasingly using AI to carry out hacks and fraud, to create deepfakes for extortion and disinformation, and to conduct cyberattacks at massive scale. As AI technology becomes more sophisticated, so will the ways in which criminals leverage it.
This report explores the stages of maturity in AI-enabled crimes — horizon, emerging, and mature — illustrating how AI is amplifying criminal capabilities by removing traditional human bottlenecks. We’ll look at real-world examples and documented cases of AI in criminal activity, and examine the severity of these challenges. And we’ll explore strategies for preventing and mitigating AI-enabled crime, emphasizing the importance of technical solutions, regulation, education and collaboration.
{{horizontal-line}}
AI enables criminal growth by removing human bottlenecks
Artificial intelligence has swept through the business world, transforming industries and rewriting the rules of efficiency and scale. Companies have adopted AI in phases — starting with off-the-shelf tools like ChatGPT to assist with tasks such as drafting responses, translating documents, or summarizing data. Over time, organizations have started to develop proprietary AI systems to automate entire workflows.
But these advances are not limited to the licit economy. Around the world, criminal enterprises are racing to harness AI’s potential in much the same way. At first, these organizations turned to AI to aid their human-led criminal efforts, translating phishing scripts into multiple languages or scanning vast codebases for exploitable vulnerabilities. These early applications mirror legitimate uses of AI, but are weaponized to devastating effect.
The next frontier for criminals is autonomy. Just as corporations strive to automate tasks, cybercriminals can develop AI agents capable of operating completely independently. These agents could identify and exploit vulnerabilities without human oversight — executing complex objectives, such as hacking critical infrastructure (e.g. water treatment plants).
This evolution doesn’t just change the scale of criminal operations; it fundamentally reshapes them, making them faster, more efficient, and alarmingly difficult to detect or counteract.
Cybercriminals are leveraging AI in new ways
Cybercriminals, scammers, and nation-state actors are increasingly leveraging AI to amplify their malicious activities. Some of the major applications they have found include:
Automation of attacks
AI-powered tools enable the automation of phishing campaigns, creating highly convincing messages at scale. AI also allows malware to adapt dynamically to evade real-time detection.
In March 2024, the US Treasury Department published a report on the current state of artificial intelligence related to cybersecurity and fraud risks in financial services. The report, which synthesizes interviews with 42 financial institutions, specifically calls out the use of Generative AI by existing threat actors to develop and pilot more sophisticated malware — giving them complex attack capabilities previously available only to the most well-resourced actors and helping less-skilled threat actors to develop simple but effective attacks.
Deepfakes and synthetic media
Criminals employ deepfake technology to impersonate executives or public figures, using AI-generated images and voices that make scams harder to detect. These techniques enable high-value fraud such as business email compromise (BEC), extortion, and social manipulation.
In November 2024, US Treasury’s FinCEN released its “Alert on Fraud Schemes Involving Deepfake Media Targeting Financial Institutions,” highlighting an “increase in suspicious activity reporting by financial institutions describing the suspected use of deepfake media in fraud schemes targeting their institutions and customers.” In its March 2024 report on “AI Cyber and Fraud Risks for Financial Institutions,” the Department asserts that scammers have looked to AI to create better deepfakes that mimic voice and other human features in more believable ways.
Additionally, fraudsters use AI to create synthetic identities: fake identities built from a composite of personal information belonging to various real people, used to open bank, credit card, cryptocurrency, and other financial accounts.
For more on Treasury’s 2024 report on “AI Cyber and Fraud Risks for Financial Institutions,” listen to TRM Talks with primary author Todd Conklin, US Treasury Deputy Assistant Secretary of Cyber and Chief AI Officer.
Enhanced cyberattacks
AI algorithms optimize ransomware operations by identifying the most critical data or systems to encrypt for maximum leverage. Nation-state actors are also using AI for advanced cyber-espionage, bypassing traditional security measures.
{{horizontal-line}}
Law enforcement, policymakers, and national security agencies must understand criminal adoption of AI
To effectively counter the accelerating threat of AI-enabled crime, law enforcement, policymakers, and national security professionals need frameworks to closely track the criminal adoption of AI so they can deploy targeted countermeasures.
At TRM, we classify AI adoption into three tiers to help stakeholders prioritize their responses and resources effectively.
- Horizon: Significant AI use is possible and "on the horizon" in this crime type, but has not yet been notably detected. Criminal organizations may be exploring applications, but implementation remains limited.
- Emerging: AI is being actively used to streamline operations, though humans still play a predominant role in decision-making and execution. Examples include automating phishing campaigns or enhancing existing fraud operations.
- Mature: AI-based activity dominates this crime type, with AI systems surpassing human-driven efforts in scale, efficiency, and sophistication. These systems operate with minimal human input, autonomously executing complex and high-impact criminal activities.
Horizon phase: Theoretical applications of AI
The horizon phase is marked by mostly theoretical applications of AI in criminal activity, with high potential for disruption within the next year or two. Criminal organizations operating in this stage of maturity may already have AI tools and systems in place, but have not yet implemented them at scale.
Horizon phase AI crime typologies
Proliferation financing
North Korea (DPRK) has increasingly relied on cyberattacks — specifically, hacking cryptocurrency firms — to fund its nuclear program. According to TRM Labs, DPRK-linked hackers stole about USD 800 million in 2024 and close to USD 3 billion in the last five years, with many attacks targeting decentralized finance (DeFi) platforms. These operations are currently facilitated by skilled human operators and elaborate laundering networks. However, the potential for AI models to scale such operations is evident.
Autonomous AI agents could be used to identify vulnerable platforms, execute hacks, and automate complex laundering schemes, further complicating efforts to curb proliferation financing. The integration of AI into DPRK’s operations would significantly amplify their capacity to evade sanctions and finance illicit activities, posing a critical challenge for global security.
Money laundering
Traditional money laundering relies on human mules and manual coordination, particularly for cybercriminals, drug traffickers, and fraudsters. AI technologies, such as synthetic ID generators and automated cryptocurrency account creation, are poised to automate these operations significantly, accelerating the pace at which they can be carried out. This development could destabilize financial systems and strengthen global criminal networks, making regulatory intervention increasingly urgent.
Cybercrime
The advent of AI agents capable of autonomously identifying vulnerabilities and executing attacks presents a new dimension of cybercrime. Ransomware groups, for example, currently rely on human affiliates to execute attacks, as well as on human-run services like initial access brokers and bulletproof hosting providers. These human intermediaries play critical roles in identifying vulnerabilities, distributing malware, and managing the infrastructure of criminal networks.
Autonomous AI agents, however, could one day replace these intermediaries by independently identifying and exploiting vulnerabilities, distributing payloads, and managing the backend systems of ransomware campaigns. While still largely theoretical, public agencies such as the US Department of Homeland Security have expressed concerns about AI's potential to disrupt critical infrastructure, as highlighted in their 2022 report on “Emerging Threats in AI.” Autonomous AI systems could eliminate the need for human hackers — enabling widespread attacks at a scale previously unimaginable, with dire economic and security implications.
Emerging phase: Early signs of AI activity
The emerging phase is characterized by the initial deployment of AI-driven criminal tools. While emerging phase criminal AI activities are still in their infancy, their potential for rapid growth requires law enforcement agencies to respond.
Emerging phase AI crime typologies
Child Sexual Abuse Material (CSAM) production
Historically, the production of CSAM relied on the exploitation of human victims. However, AI-generated content has started to replace human involvement, creating a new and deeply troubling frontier. US prosecutors have noted early cases of AI-generated CSAM, which complicates detection efforts and raises new ethical and legal challenges.
This issue has been extensively studied by the Internet Watch Foundation (IWF), whose July 2024 report revealed over 3,500 new AI-generated criminal child sexual abuse images uploaded to a dark web forum previously analyzed in October 2023. Additionally, the report highlights the emergence of AI-generated videos depicting child sexual abuse, often using deepfake technology to add a child’s face to adult pornographic content. These findings underscore the escalating severity of AI’s misuse in this domain, exacerbating societal harm and further straining enforcement resources.
Disinformation operations
AI is reshaping propaganda and disinformation campaigns by streamlining content generation and distribution. Agencies like INTERPOL have documented early deployments of AI agents that automate tailored messaging and target vulnerable populations.
For example, a joint report by Europol, the United Nations Interregional Crime and Justice Research Institute (UNICRI), and Trend Micro highlights how AI is being used to generate and disseminate propaganda on a large scale, specifically targeting at-risk groups with tailored misinformation campaigns. The speed and scale of these operations accelerate societal polarization, disrupt democratic processes, and challenge existing regulatory frameworks.
Fraud and scams
Scam operations, traditionally reliant on human intervention, are increasingly adopting AI tools to enhance their effectiveness. AI-generated phishing emails and deepfake impersonations have surfaced in documented cases, demonstrating a rise in the sophistication of these schemes. This growing complexity erodes consumer trust and poses significant challenges in distinguishing legitimate communications from fraudulent ones.
For example, in 2024, we saw multiple examples of sophisticated scams utilizing AI to mimic the voice of a company’s CEO in a phone call, convincing an employee to transfer funds to a fraudulent account.
TRM also saw instances of scam groups with exposure to a Chinese hacker group offering deepfake cybercrime services, likely used by the scammers in an attempt to modify their appearances and align with their fraudulent narratives.
This illustrates how AI-generated impersonations are becoming a critical tool for scammers to exploit trust and bypass traditional security measures. In 2024, INTERPOL launched a campaign to raise awareness about the growing threat of cyber and financial crimes using generative AI scams and other technology.
Mature phase: AI supersedes human activity
The mature phase represents the point where AI-driven activities dominate their respective domains, with criminals relying more on AI systems than human operators. While no criminal domain has yet reached this stage, it’s likely that this new reality is not far off. Two trends underscore this trajectory:
- AI systems gaining access to essential tools such as browsers, databases, email platforms, and cryptocurrency wallets
- AI models being programmed to optimize specific goals, such as maximizing profit or influence
For example, the "Terminal of Truths" (ToT) case demonstrated how an AI agent autonomously participated in a cryptocurrency ecosystem, amassing wealth in digital assets through interactions with human and bot agents. This highlights the potential for AI agents to engage with digital economies in ways that fuel persistent, large-scale fraud.
And as these systems grow more capable, they could execute increasingly complex and high-impact criminal activities — such as large-scale market manipulations or exploiting vulnerabilities in critical infrastructure. The need for proactive measures to monitor and mitigate AI misuse in these advanced forms is urgent.
{{horizontal-line}}
Preventing and mitigating AI-enabled crime
So where do we go from here? How do we balance embracing the opportunity and innovation presented by AI with ensuring the safety of people and systems around the world?
Addressing the misuse of AI will require a multi-faceted approach — combining technical solutions, policy and regulatory measures, public education, and collaboration.
Technical solutions
AI-driven detection systems serve as a critical defense against AI-enabled crime. For instance, tools that identify deepfakes or detect anomalies in financial transactions are already proving effective. Enhanced cybersecurity frameworks that integrate AI-based threat detection can also further mitigate risks associated with large-scale attacks.
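To make this concrete, here is a minimal sketch of how an anomaly detector can flag transactions that deviate from a learned baseline, using scikit-learn’s IsolationForest. The feature set (amount, hour of day, counterparty age) and all values are illustrative assumptions, not a production fraud model.

```python
# Illustrative only: a toy anomaly detector over hypothetical transaction features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" transactions: [amount_usd, hour_of_day, counterparty_age_days]
normal = np.column_stack([
    rng.lognormal(mean=5.0, sigma=1.0, size=1000),  # typical payment amounts
    rng.integers(8, 20, size=1000),                 # business hours
    rng.integers(100, 2000, size=1000),             # established counterparties
])

# A few suspicious transactions: large, off-hours, brand-new counterparties
suspicious = np.array([
    [250_000.0, 3, 1],
    [180_000.0, 4, 2],
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Lower scores are more anomalous; predict() returns -1 for outliers
for tx in suspicious:
    score = model.score_samples(tx.reshape(1, -1))[0]
    label = model.predict(tx.reshape(1, -1))[0]
    print(f"amount={tx[0]:,.0f} hour={int(tx[1])} age_days={int(tx[2])} "
          f"score={score:.3f} outlier={label == -1}")
```

Real systems layer many more signals on top of this idea, but the core principle is the same: learn what normal looks like, then surface what doesn’t fit.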
Within TRM Forensics, Signatures® enable teams to proactively detect suspicious activity with advanced blockchain pattern recognition. Powered by advanced AI and machine learning, Signatures automatically uncover suspicious patterns across multiple transactions that might otherwise go unnoticed — giving you confidence that no investigative angle is left unconsidered.
Policy and regulation
Policymakers globally are working to create regulatory environments that encourage innovation in AI while mitigating the risks from illicit actors who seek to abuse transformative technology. On his second day in office, US President Donald Trump signed an executive order on AI that seeks “to sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.”
In its May 2024 report on “Responsible Artificial Intelligence in Financial Markets,” the CFTC’s Technology Advisory Committee (TAC) — of which TRM’s Ari Redbord is the vice chair — emphasized both the transformative potential and significant risks of AI in financial markets. While AI is widely employed for fraud detection, risk management, and predictive analytics, it also introduces vulnerabilities, including deepfakes for identity fraud, automated phishing, and algorithm manipulation. To address these challenges, in the report, the TAC recommended adopting a robust AI Risk Management Framework, aligned with NIST guidelines, to identify vulnerabilities and ensure responsible AI use.
Global cooperation is also essential to address the transnational nature of AI-enabled crime. Organizations like INTERPOL and the United Nations are advocating for harmonized regulations governing AI use. Ethical guidelines must ensure responsible AI development, with clear penalties for misuse, while governments must continue to enforce compliance with these standards.
Public awareness and education
Educating the public about the risks of AI-driven scams and disinformation is crucial in crime mitigation. For example, the Federal Bureau of Investigation (FBI) has initiated several campaigns to educate the public on the dangers of online threats and misinformation. The "Think Before You Post" campaign warns individuals about the serious consequences of posting hoax threats on social media, emphasizing that such actions can lead to federal charges with penalties of up to five years in prison. This initiative aims to reduce the strain on law enforcement resources caused by investigating false threats and to prevent unnecessary public alarm.
Similarly, Europol has addressed the challenges posed by deepfake technology through comprehensive reports and public awareness efforts. Their publication, "Facing Reality? Law Enforcement and the Challenge of Deepfakes," provides an in-depth analysis of how deepfakes can be utilized in criminal activities such as disinformation campaigns, document fraud, and non-consensual pornography. Europol emphasizes the need for law enforcement agencies to develop new skills and adopt advanced technologies to detect and counteract the malicious use of deepfakes.
Collaboration between stakeholders
Public-private partnerships are indispensable in combating AI-enabled crime. Financial institutions and other companies should prioritize incorporating safety features into their AI systems, while governments should incentivize collaborative research and innovation. Platforms like TRM play a pivotal role in this ecosystem by enabling information-sharing and integrating AI tools for enhanced security measures.
Leveraging TRM’s blockchain intelligence to combat AI-driven threats
TRM Labs has been an AI company since day one. Our blockchain intelligence platform ingests billions of data points each day, and our AI-enabled solutions support:
Blockchain intelligence for risk identification
TRM uses advanced blockchain intelligence to trace illicit transactions linked to ransomware, fraud, or sanctioned entities. Combining AI with blockchain intelligence allows TRM to detect patterns of anomalous behavior, even when attackers use obfuscation techniques.
TRM Labs’ Signatures® behavioral tool leverages AI to identify distinct transaction patterns and behavioral anomalies on the blockchain. By analyzing blockchain data, Signatures detects unique characteristics associated with illicit activity, such as fraud, money laundering, or sanctions evasion. The tool enables investigators to uncover previously unknown wallets and connections, transforming raw blockchain data into actionable intelligence for tracing illicit funds and disrupting criminal networks.
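The rule below is a minimal, hypothetical sketch of how a laundering typology (here, structuring: many transfers just below a reporting threshold in a short window) can be encoded as a reusable detection pattern. It is illustrative only and does not reflect how Signatures works internally; the Transfer type, thresholds, and addresses are all assumptions.

```python
# Hypothetical sketch: encoding a "structuring" typology as a detection rule.
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Transfer:
    sender: str
    recipient: str
    amount_usd: float
    timestamp: int  # unix seconds

def detect_structuring(transfers, threshold=10_000, window_s=86_400, min_count=5):
    """Flag senders making many transfers just below a threshold within a window."""
    by_sender = defaultdict(list)
    for t in transfers:
        if 0.8 * threshold <= t.amount_usd < threshold:  # "just below" band
            by_sender[t.sender].append(t.timestamp)
    flagged = []
    for sender, times in by_sender.items():
        times.sort()
        start = 0
        # Sliding window: count near-threshold transfers within window_s seconds
        for end in range(len(times)):
            while times[end] - times[start] > window_s:
                start += 1
            if end - start + 1 >= min_count:
                flagged.append(sender)
                break
    return flagged

# Six $9,500 transfers from one address, ten minutes apart
txs = [Transfer("0xabc", f"0x{i:03x}", 9_500.0, 1_700_000_000 + i * 600)
       for i in range(6)]
print(detect_structuring(txs))  # ['0xabc']
```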
Real-time monitoring and alerts
Governments and financial institutions can utilize TRM Labs' real-time monitoring tools to detect suspicious transactions. AI-enhanced alerts prioritize risks based on severity, enabling swift responses to emerging threats.
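As an illustration of severity-based prioritization, the sketch below combines a few risk factors into a single score used to sort alerts. The factor names, weights, and addresses are hypothetical assumptions for illustration, not TRM’s actual scoring model.

```python
# Hypothetical alert-prioritization sketch: weighted risk factors -> severity score.
from dataclasses import dataclass, field

WEIGHTS = {
    "sanctions_exposure": 0.5,  # counterparty linked to a sanctioned entity
    "darknet_exposure": 0.3,    # funds traced to darknet markets
    "velocity_spike": 0.2,      # unusual burst of transaction volume
}

@dataclass
class Alert:
    address: str
    factors: dict = field(default_factory=dict)  # factor name -> strength in 0.0..1.0

    @property
    def severity(self) -> float:
        return sum(WEIGHTS.get(name, 0.0) * value
                   for name, value in self.factors.items())

alerts = [
    Alert("0xaaa", {"velocity_spike": 0.9}),
    Alert("0xbbb", {"sanctions_exposure": 1.0, "darknet_exposure": 0.4}),
    Alert("0xccc", {"darknet_exposure": 0.7}),
]

# Highest-risk alerts surface first for the investigating team
for alert in sorted(alerts, key=lambda a: a.severity, reverse=True):
    print(f"{alert.address}: severity={alert.severity:.2f}")
```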
Collaborative information sharing
TRM Labs facilitates collaboration among stakeholders through shared intelligence and threat mapping. This aligns with global initiatives to disrupt ransomware and financial crime networks, such as the Counter Ransomware Initiative (CRI), as well as public-private collaborations like the T3 Financial Crime Unit (T3 FCU), a first-of-its-kind initiative from TRON, Tether, and TRM aimed at combating illicit activity associated with the use of USDT on the TRON blockchain. Since its inception in August 2024, T3 has supported the seizure of over USD 130 million.
Training AI models for defense
TRM Labs enables institutions to train AI models to recognize evolving scam techniques, ensuring adaptive defenses against attackers. For example, AI tools can analyze deepfake content and flag inconsistencies linked to malicious actors.
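As a simplified illustration of the training idea, the sketch below fits a small text classifier to flag scam-style messages. The tiny inline dataset is fabricated for illustration; a real deployment would train on large labeled corpora and retrain continuously as scam language evolves.

```python
# Toy training sketch: a text classifier that learns to flag scam-style messages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Urgent: verify your wallet seed phrase to avoid account suspension",
    "Your CEO needs you to wire funds immediately, keep this confidential",
    "Guaranteed 10x returns, send crypto to this address today",
    "Team lunch is moved to 1pm on Thursday",
    "Attached is the Q3 budget spreadsheet for review",
    "Reminder: all-hands meeting tomorrow at 10am",
]
labels = [1, 1, 1, 0, 0, 0]  # 1 = scam-like, 0 = benign (fabricated examples)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(messages, labels)

test = "Immediate wire transfer required, do not tell anyone"
print(model.predict([test]), model.predict_proba([test]))
```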
{{horizontal-line}}
The future relies on using AI to fight AI crime
The dual-use nature of AI presents both immense opportunities and grave challenges. While the technology has not yet reached a stage where AI-driven illicit activity dominates the criminal battleground, the trajectory of AI technology highlights the need for urgent, preventive action. Law enforcement, national defense, government, and private sector organizations must work together to counter AI-enabled crimes by:
- Identifying and mitigating financial crimes through advanced blockchain intelligence
- Building resilience with AI-integrated defenses
- Encouraging global collaboration and intelligence-sharing
The integration of AI capabilities into the tools these teams already rely on, such as TRM Labs, ensures a proactive and adaptive defense against the rapidly evolving landscape of AI-driven threats.
The rise of AI-enabled crime underscores the urgent need for innovative, adaptive solutions to address this growing threat. TRM is committed to fighting AI-driven financial crime by leveraging the same cutting-edge AI technologies that criminals exploit, coupled with human expertise to ensure comprehensive defense strategies.