US Treasury Issues Report on AI Cyber and Fraud Risks for Financial Institutions

Today, the US Treasury Department published a report on the current state of artificial intelligence-related cybersecurity and fraud risks in financial services. The report takes a deep dive into current AI use cases, threat and risk trends, and best practices for financial institutions and other key stakeholders.

The report comes in response to the October 30, 2023, White House Executive Order on AI, which charged the Treasury with issuing a public report on best practices for financial institutions to manage AI-specific cybersecurity risks.

The EO specifically stated:

“Artificial intelligence (AI) holds extraordinary potential for both promise and peril.  Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure.  At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.  Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks.  This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.”

Here are a few key themes from the report:

AI can be used effectively by financial institutions for cybersecurity and fraud protection

According to the report, many financial institutions are using AI to improve employee efficiency in research and report writing. Some institutions have been using AI for fraud detection for over a decade. Financial institutions are already using AI for cybersecurity, “making the institutions reportedly more agile” when it comes to detecting cyber intrusions and other malicious activity. The use of AI and other technologies to mitigate risk has become more important as threat actors, from hackers to nation-state actors, grow increasingly sophisticated.

The report specifically explains, “Generative AI could be used to provide opportunities for educating employees and customers about cybersecurity and fraud detection and prevention measures or for analyzing internal policy documents and communications to identify and prioritize gaps in those measures.”

In fact, recent advances in machine learning have turbocharged AI’s transformative potential in detecting, preventing, and even predicting illicit activity. These advances are especially notable in blockchain intelligence tools like TRM Labs, which uses machine learning to associate digital asset wallets with real-world entities.
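
To make this concrete, here is a deliberately simplified Python sketch of the kind of machine-learning classification described above, using scikit-learn. It is illustrative only: the features, labels, and values are invented for this example and do not reflect TRM Labs’ actual models or data.

    # Toy illustration only: feature names, labels, and values are hypothetical.
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical per-wallet features: [tx_count, avg_tx_value, share_of_mixer_exposure]
    X_train = [
        [1200, 0.8, 0.02],   # wallet labeled "exchange"
        [15,   4.1, 0.65],   # wallet labeled "scam"
        [900,  1.1, 0.01],   # wallet labeled "exchange"
        [22,   3.7, 0.71],   # wallet labeled "scam"
    ]
    y_train = ["exchange", "scam", "exchange", "scam"]

    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X_train, y_train)

    # Score an unlabeled wallet whose behavior resembles the "scam" examples
    print(model.predict([[18, 3.9, 0.7]]))  # ['scam']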

AI can present new cybersecurity and fraud risks for financial institutions

While institutions are using AI for fraud and cyber risk mitigation, the report points out that AI presents new and dangerous cyber and fraud risks. 

According to the report, when it comes to cybersecurity risk, “Concerns identified by financial institutions are mostly related to lowering the barrier to entry for attackers, increasing the sophistication and automation of attacks, and decreasing time-to-exploit. Generative AI can help existing threat actors develop and pilot more sophisticated malware, giving them complex attack capabilities previously available only to the most well-resourced actors. It can also help less-skilled threat actors to develop simple but effective attacks.”

According to Treasury, threat actors are using AI to enhance social engineering, malware/code generation, vulnerability discovery, and disinformation capabilities.

When it comes to fraud, Treasury asserts that scammers have looked to AI to create better deepfakes that mimic voices and other human features in more believable ways. In addition, fraudsters are using AI to enhance their ability to create synthetic identities – fake identities assembled from a composite of personal information belonging to various real people and used to open bank, credit card, cryptocurrency, and other financial accounts.
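
From the defender’s side, even simple cross-checks can surface some synthetic-identity red flags, such as one Social Security number appearing under multiple names and dates of birth. The Python sketch below is a minimal, hypothetical heuristic, not a description of any institution’s actual controls.

    # Minimal, hypothetical heuristic; real synthetic-identity programs are far more sophisticated.
    from collections import defaultdict

    def flag_possible_synthetic_ids(applications):
        """Return SSNs that appear under more than one distinct name/date-of-birth pair."""
        identities_by_ssn = defaultdict(set)
        for app in applications:
            identities_by_ssn[app["ssn"]].add((app["name"], app["dob"]))
        return {ssn for ssn, identities in identities_by_ssn.items() if len(identities) > 1}

    applications = [
        {"ssn": "123-45-6789", "name": "Jane Doe",  "dob": "1990-01-01"},
        {"ssn": "123-45-6789", "name": "John Roe",  "dob": "1985-06-15"},  # same SSN, different identity
        {"ssn": "987-65-4321", "name": "Ann Smith", "dob": "1978-03-22"},
    ]
    print(flag_possible_synthetic_ids(applications))  # {'123-45-6789'}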

Public and private sectors must work together on AI risks, including through the development of digital ID

The report sets forth a number of best practices for financial institutions for utilizing AI while mitigating cyber and fraud risks, with a focus on addressing the capability gap, narrowing the fraud data divide, and the potential for regulation in the AI and financial services space.

The report also notably calls out the need to further explore and improve digital identity in order to thwart cybercriminals who seek to “exploit gaps at all stages of financial institutions’ customer identity processes.” The report asserts that digital identity solutions “may help financial institutions combat fraud and insider threats and strengthen cybersecurity,” and points to the potential use of digital ID to strengthen anti-money laundering and countering the financing of terrorism programs.
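
One building block of most digital identity schemes is a credential that a financial institution can cryptographically verify against a trusted issuer’s public key. The Python sketch below, which uses the third-party cryptography package, shows that verification step in isolation; the credential format and names are invented for illustration and are not drawn from the Treasury report.

    # Illustrative sketch of signed-credential verification; the payload format is hypothetical.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    issuer_key = Ed25519PrivateKey.generate()      # held only by the credential issuer
    issuer_public_key = issuer_key.public_key()    # distributed to verifying institutions

    credential = b'{"subject": "customer-42", "identity_verified": true}'
    signature = issuer_key.sign(credential)

    def credential_is_valid(public_key, payload, sig):
        """Accept the credential only if the issuer's signature checks out."""
        try:
            public_key.verify(sig, payload)
            return True
        except InvalidSignature:
            return False

    print(credential_is_valid(issuer_public_key, credential, signature))            # True
    print(credential_is_valid(issuer_public_key, b'{"tampered": true}', signature)) # False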

What does this have to do with crypto?

While the report does not specifically address AI risks and opportunities in the digital assets space, most of the best practices it outlines apply equally to virtual asset service providers. In fact, as North Korea and other threat actors continue to attack the crypto ecosystem at alarming speed and scale, the use of AI to bolster cyber defenses is critical.

Fraud and scams are also a mainstay of the crypto ecosystem. According to a recent report from TRM Labs, scams and fraud accounted for roughly a third of all illicit funds in the crypto ecosystem in 2023. Around USD 1.4 billion more was sent to scam and fraud entities in 2023 than in 2022, an 11% increase from USD 12.5 billion to USD 13.9 billion. The data comprises the total volume of funds sent to addresses linked by TRM to scams and frauds; some of these funds appear to be part of the laundering of scam proceeds. Apparent Ponzi schemes were the largest segment of scams and frauds, accounting for around USD 6.6 billion.

Proceeds from apparent pig butchering, in which criminals use psychological manipulation to defraud victims through fake investment schemes, declined slightly from USD 4.7 billion in 2022 to USD 4.4 billion in 2023.
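
For readers who want to sanity-check the year-over-year math above, the short Python sketch below reproduces the percentage changes from the rounded figures cited in this post (the exact underlying totals may differ slightly).

    # Year-over-year changes, computed from the rounded USD-billion figures cited above.
    scams_2022, scams_2023 = 12.5, 13.9
    print(round(scams_2023 - scams_2022, 1))                    # 1.4 (USD billion increase)
    print(round((scams_2023 - scams_2022) / scams_2022 * 100))  # 11 (% increase)

    pig_butchering_2022, pig_butchering_2023 = 4.7, 4.4
    print(round((pig_butchering_2023 - pig_butchering_2022) / pig_butchering_2022 * 100))  # -6 (% change)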

Given the threat already posed by fraud and scams to the crypto ecosystem, Treasury’s report is an important reminder that bad actors will look to AI and other emerging technology for illicit purposes.

Most importantly, however, it is a reminder that technology itself is neither good nor evil, and that it can bring both tremendous promise and new perils. As the White House writes in its Executive Order on AI, “In the end, AI reflects the principles of the people who build it, the people who use it, and the data upon which it is built.”

For much more on Treasury’s AI report, listen to TRM Talks with the report’s primary author, US Treasury Deputy Assistant Secretary for Cyber and Chief AI Officer Todd Conklin.
