
Feb 26, 2025 - 52min

EPISODE 77

TRM Talks: Deepfakes and the Rise of AI-Enabled Crime with Hany Farid

With Hany Farid

AI is reshaping society and the way we live and work. However, it is also supercharging illicit activity, from deepfakes and automated fraud to large-scale cyberattacks and child exploitation. In this episode, Ari Redbord, TRM's Global Head of Policy, sits down with Hany Farid, UC Berkeley professor and leading expert on deepfakes and AI-driven deception, to unpack how criminals are exploiting artificial intelligence to manipulate reality, commit crimes, and challenge global security.

Hany kicks off the episode with a jaw-dropping live deepfake demonstration, impersonating Ari in real time, a stark warning about how AI-generated identities can be used to deceive businesses, governments, and individuals. From fraudsters using deepfakes to bypass KYC and infiltrate financial institutions, to nation-state actors leveraging AI to spread disinformation and wage cyber warfare, the conversation explores the far-reaching implications of this rapidly evolving technology. Ari and Hany also discuss real-world examples of AI-driven crime, including:

  • Synthetic identity fraud and deepfake scams targeting banks and crypto platforms
  • Cybercriminals using AI-powered phishing and social engineering attacks
  • The role of AI in North Korea’s cybercrime operations
  • How deepfakes could be weaponized for political and national security threats
  • The ethical and legal challenges surrounding AI-generated content

Despite the risks, there is hope. Ari and Hany dive into the critical need for AI-powered defenses, regulatory frameworks, and public-private collaboration to combat these emerging threats. They discuss the importance of blockchain intelligence, deepfake detection tools, and cybersecurity innovations in staying ahead of bad actors who are weaponizing AI. This is one of the most eye-opening TRM Talks episodes to date — don’t miss this deep dive into the intersection of AI, crime, and security.

Click here to listen to the entire TRM Talks: Deepfakes and the Rise of AI-Enabled Crime with Hany Farid. Follow TRM Talks on Spotify to be the first to know about new episodes.

Ari Redbord

(00:02): I am Ari Redbord and this is TRM Talks.

(00:06): I'm Global Head of Policy at TRM Labs. At TRM, we provide blockchain intelligence software to support law enforcement investigations and to help financial institutions and cryptocurrency businesses mitigate financial crime risk within the emerging digital asset economy. Prior to joining TRM, I spent 15 years in the US federal government, first as a prosecutor at the Department of Justice and then as a Treasury Department official, where I worked to safeguard the financial system against terrorist financiers, weapons of mass destruction proliferators, drug kingpins, and other rogue actors. On TRM Talks, I sit down with business leaders, policymakers, investigators, and friends from across the crypto ecosystem who are working to build a safer financial system.

(00:51): Today on TRM Talks, I sit down with AI deepfake expert and UC Berkeley professor Hany Farid. But first, Inside the Lab, where I share data-driven insights from our blockchain intelligence team.

(01:06): In this week's Inside the Lab, we're discussing TRM's latest report, "The Rise of AI-Enabled Crime," exploring the evolution, risks, and responses to AI-powered criminal enterprises.

(01:19): Malign actors are increasingly leveraging AI to carry out hacks and fraud, create deepfakes for extortion and misinformation, and execute cyber attacks at massive scale. The new report highlights these threats and explores how AI can also be harnessed to mitigate them. AI is being used to enhance traditional cybercrime tactics, from deepfake-enabled fraud to automated money laundering and phishing. Criminals exploit AI to automate phishing campaigns, making them more convincing and scalable; identify and exploit security vulnerabilities in networks and software; launch autonomous attacks against critical infrastructure; and evade detection by adapting malware behavior in real time.

(02:03): To combat these emerging risks, TRM's blockchain intelligence and AI-driven analytics are essential. By integrating AI-powered detection tools with blockchain intelligence, investigators can identify suspicious transactions and trace illicit financial flows, uncover hidden networks of criminal activity, authenticate identities and detect deepfake-generated fraud, and prevent large-scale manipulation of financial systems. Governments, financial institutions, and blockchain intelligence firms must work together to build resilient systems that leverage AI for defense just as criminals use it for offense.

(02:40): Public-private partnerships will play a critical role in developing threat intelligence, enforcing regulations, and ensuring that AI remains a tool for security rather than exploitation. TRM's report provides an overview of evolving threats and the tools being developed to help law enforcement and national security agencies use AI to disrupt illicit actors. The future is here. Understanding the evolving threat landscape and leveraging innovation is crucial to staying ahead of AI-enabled crime. TRM is committed to this mission. For the full report, go to our website, TRMlabs.com, and read "The Rise of AI-Enabled Crime," exploring the evolution, risks, and responses to AI-powered criminal enterprises.

(03:31): And now, AI crime expert Hany Farid. Hany, I am so psyched for this conversation today. Before we kick this off, you came on here essentially as me and really kind of started things off with a deepfake. Tell folks about that before we dig into everything else we're going to talk about today.

Hany Farid

(03:53): First of all, what I loved about that, and for the people listening, you should go watch the video: both you and your producer were quite startled. There was this moment of, is this a glitch in the Matrix? Because it's a little weird. So here's what I did. It's incredibly easy. Now, put your name into a Google search. I found a screenshot where you were standing in front of the camera, you happened to be wearing a tie, and by the way, I think the same jacket and shirt that you're wearing now . . .

Ari Redbord

(04:17): Possibly, yes. Blue suit and blue striped shirt guy.

Hany Farid

(04:23): And I took a single screenshot of you, and this is important, a single image of you from a single frame of a single video interview that you did. And I put that into a piece of software that I paid 10 bucks for. That creates what's called an avatar deepfake. And the way this works is, I was sitting in front of my computer when I connected, and as I was moving, raising my eyebrows, blinking, moving my mouth, moving my head, it was animating that single image to make it look like it was you moving. Now, the voice, of course, was me. I could have done voice modulation; I didn't. It just would have taken a little more time than I had in preparation for this. And it generates all of that in real time. And then I simply take the platform we are talking on right now and point it at the output the software is producing. And so now, in real time, 30 frames a second, I can get onto a video call and I can impersonate another person, their likeness and their voice. And it is very difficult to tell the difference. There are a few tells. You can probably tell if I move too much, you'd see a little glitch. But of course, have me back a year from now, and that will be gone, and you'll be blown away by the things that we can do.
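
For readers who want the mechanics, here is a minimal sketch of that real-time loop in Python. The reenactment model is the whole trick, so the animate_frame function below is a hypothetical placeholder (it just resizes the still image so the loop runs end to end); the webcam capture and virtual-camera output use the real OpenCV and pyvirtualcam libraries.

    import cv2               # pip install opencv-python
    import numpy as np
    import pyvirtualcam      # pip install pyvirtualcam

    WIDTH, HEIGHT, FPS = 640, 480, 30

    def animate_frame(avatar: np.ndarray, driving: np.ndarray) -> np.ndarray:
        # Hypothetical stand-in for a face-reenactment model that would map
        # the driving frame's head pose, blinks, and mouth motion onto the
        # single still image. Here it just returns the resized avatar.
        return cv2.resize(avatar, (WIDTH, HEIGHT))

    avatar = cv2.imread("target_screenshot.jpg")  # the one image of the target
    webcam = cv2.VideoCapture(0)                  # the impersonator's real camera

    # The animated output is exposed as a virtual camera, which video-call
    # software then lists as an ordinary webcam.
    with pyvirtualcam.Camera(width=WIDTH, height=HEIGHT, fps=FPS) as cam:
        while True:
            ok, driving = webcam.read()   # the impersonator's motion drives it
            if not ok:
                break
            fake = animate_frame(avatar, driving)
            cam.send(cv2.cvtColor(fake, cv2.COLOR_BGR2RGB))  # pyvirtualcam expects RGB
            cam.sleep_until_next_frame()                     # hold the ~30 fps cadence

Everything outside animate_frame is commodity plumbing, which is why the barrier to entry is ten dollars and a single screenshot.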

Ari Redbord

(05:31): Yeah, I mean, part of our conversation today is around the fact that I've never seen anything move faster from a technology perspective, and it just gets better and better every day. You mentioned the voice piece, and just to sort of double down on that, you also mentioned to me that that's pretty easy to do. So in other words, I'm on the Zoom: my image, my likeness, and my voice, from 10 or 20 seconds of audio.

Hany Farid

(05:49): Yeah, so to your point, a couple of years ago you needed eight hours. If somebody's recording, somebody like you with a popular podcast, I could probably cobble together eight hours. But for the average person, no. Today it's about 20 seconds, 15 to 20 seconds of your voice, you talking just normally; you don't have to be saying anything in particular. I can upload that to a service that I pay $5 a month for. I click a button that says I have permission to use Ari's voice, which, even if I don't, I can click of course, and then I type, and it will synthesize anything I want. That's an asynchronous attack. But I can also do it synchronously. That is, I can train a model; it takes a little bit more audio to do that. And then in real time, as I'm talking into this microphone, it is modulating my voice to sound like you. And that's of course much more interactive.
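
As a concrete aside, the asynchronous attack described above maps onto very little code. This sketch uses the real Python requests library against a deliberately fictional service; the URL, field names, and response shape are all assumptions standing in for whatever commercial offering an attacker might use.

    import requests

    API = "https://voice-cloning.example.com/v1"      # fictional endpoint
    HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

    # Step 1: register a cloned voice from ~15-20 seconds of ordinary speech.
    with open("target_sample_20s.wav", "rb") as f:
        resp = requests.post(f"{API}/voices", headers=HEADERS,
                             files={"sample": f},
                             data={"has_permission": "true"})  # the one-click consent box
    voice_id = resp.json()["id"]                               # assumed response shape

    # Step 2: type anything, get it back in the cloned voice.
    speech = requests.post(f"{API}/speech", headers=HEADERS,
                           json={"voice_id": voice_id,
                                 "text": "Any sentence at all, in the target's voice."})
    with open("synthesized.wav", "wb") as f:
        f.write(speech.content)

That is the asynchronous path; the synchronous attack Hany describes works the same way in spirit, swapping step 2 for a real-time model trained on somewhat more audio.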

Ari Redbord

(06:37): That's essentially what you were doing just a moment ago without the voice piece, just because it took a few more seconds, but . . .

Hany Farid

(06:42): Correct.

Ari Redbord

(06:43): Yeah, that was what was so crazy about it. And folks are only going to see a video of this, but this wasn't a video; this was happening in real time on the screen.

Hany Farid

(06:49): I connected on the call. This wasn't staged. I connected on the call and you and I were talking.

Ari Redbord

(06:53): Yeah, you scared the crap out of Wesley who was on here with us a moment ago. Yeah, no, it was totally wild.

Hany Farid

(06:59): My work is done here.

Ari Redbord

(07:00): We have never gotten into a TRM Talks as fast as we just got into this one. So I'm going to back up a moment. I'm really excited to have this conversation today. Talk to me a little bit about your journey into this. How did you end up with this focus?

Hany Farid

(07:14): Yeah, so I've been an academic for more than 25 years now, 20 years at Dartmouth College on the east coast and more recently here at UC Berkeley. I'm also co-founder, as you mentioned at the top of the hour, of GetReal Labs, which is a company working in the space of deepfake and manipulated media detection. I got started in the early 2000s, at a time when digital was nothing. The internet was nothing. AI was nothing. And it's not that I was particularly prescient, but you knew that digital was coming. You knew something was bubbling up with digital technology. You knew something was bubbling up with internet technology. Nobody really knew what was coming with today's AI. And I started worrying about what happens when media, images, audio, and video, go from the analog world to the digital world. And in particular, and I think this will resonate with you as a former federal prosecutor, what do you do with evidence in a court of law?

(08:06): How do you authenticate critical decision-making when you are dealing with something that is inherently malleable? And for the first 15 years of my career, from about 2000 to 2015, it was a pretty bespoke field. To the point that you just made a minute ago, we would measure changes in technology in many, many months, typically 12 to 18 month cycles. And it was a good gig, right? We developed some forensic techniques, we'd use them in courts of law, we'd help out media agencies, we'd help out national security and law enforcement, and a year and a half would go by, new technologies would come, and we'd develop new defenses. And then around 2016, 2017, there was a sea change with generative AI, and suddenly we started measuring change not in months and years, but in weeks and days.

(08:55): And it got weird really fast. Suddenly, over a very short period of time, we went from needing some skill to create manipulated media to: you download a piece of software for 10 bucks and I can get on a call and look exactly like you. And of course, the technology has advanced very quickly, the threats have advanced very quickly, and we are scrambling to keep up. That's part of the reason why I co-founded the company: academia simply can't keep up with that pace. And we really wanted to put these tools in the hands of the people who need them, to be able to figure out what they are looking at. Is it real or is it fake?

Ari Redbord

(09:29): Amazing. So tell me about GetReal.

Hany Farid

(09:31): Yeah, so we are building a suite of tools that will analyze images, audio, and video, both what we call asynchronously and synchronously. And the underlying technologies are quite similar, but the operationalizing, of course, is very different. Integrating into a Teams stream or a Zoom stream or a phone stream is very different than sitting back and waiting for a file. But underneath it, we are in the business of developing technology that will help individuals, organizations, and governments protect against malicious, manipulated media.

Ari Redbord

(10:03): It's extraordinary. As we're sitting here right now, I want to say it was yesterday, I was online and I saw this video with all of these celebrities wearing the shirt with the middle finger up to Kanye. Are you tracking this?

Hany Farid

(10:15): I saw it.

Ari Redbord

(10:16): And I said to myself, oh yeah, that feels right. Okay. That didn't feel out of the ordinary. And a friend of mine shared with me last night the fact that this was a deepfake, but I was floored that it was a deepfake. It was perfect. Tell me about that.

Hany Farid

(10:29): Okay, first of all, here's what's impressive about this. When I first saw it, I was pretty sure it was a deepfake, but I do this for a living. And I thought what was done was that they took a video of actual people with that shirt and then just did a deepfake of the face to make it look like celebrities. And that would have been impressive, by the way. But it's even more impressive than that. We just learned, just an hour ago, that the entire video was AI-generated: all of the motion of the humans, the full thing. It was done with two different technologies, one to generate the human motion, as you see the person moving, and then what's called a face-swap deepfake on top of that. The entire video was AI-generated, and I swear to God, I did not know that the first time I looked at it.

(11:13): That blew me away. Extraordinary.

Ari Redbord

(11:15): When I saw this yesterday, I was like, I have got to ask you about this. Look, I mean, there are going to be more significant deepfakes very, very soon, but this one was pretty viral. And that becomes really, really interesting. We're going to dig way deeper into this, but you talked about the early days; walk me through how this kind of deepfake, this type of AI-enabled activity, has evolved.

Hany Farid

(11:38): So there have been a few inflection points. Let's start by agreeing that we've always been able to manipulate photographs. If you go back to the early 1900s, Stalin famously airbrushed people out of photos. History is riddled with manipulated images, but that was done in the analog world, and very, very few people, typically state-sponsored actors, could do it. And of course, the distribution channel was simply putting a photo that had been manipulated into a history book. Now fast forward to the beginning of the digital age, when we started getting digital cameras and Photoshop was pretty good. We could do things like create an image of two people standing side by side, or an image of a shark swimming down the street after a storm, which is very popular even today. And now the distribution channels are changing, because in the early days of social media, I could do more than just put it into a history book.

(12:25): I could put it online, and it could go viral and get a lot of views. And then the next inflection point. This is, by the way, the standard; it's very predictable: we make technology easier to use, more ubiquitous, and cheaper. This has been the trend for almost the last hundred years, and that's what we've been doing with digital manipulation. So all AI did, fundamentally, is make it easier. And the distribution channels have dramatically changed, because now social media dominates the landscape, and suddenly the ability to distort reality is very, very real, in terms of creation, dissemination, and amplification.

Ari Redbord

(13:02): Talk to me a little bit about how AI has changed the way that criminal actors operate. What is the threat landscape today as opposed to even a few years ago?

Hany Farid

(13:12): So I think, somewhat predictably, because we have seen this since the early rise of computers and through the internet and mobile revolutions, cyber criminals and state-sponsored actors will figure out how to exploit this technology. So what are we seeing with cyber criminals? Alright, let's go through the list. I've already mentioned this, but it's worth mentioning again: people are creating non-consensual intimate imagery, in some cases designed to embarrass, but in some cases designed to extort. They will create an explicit image of a woman or a man or, in many cases, a child, send it to them, and say, if you don't do this, pay us in Bitcoin, pay us in cash, create content for us of yourself, we will distribute this material to your friends and family. And it is awful, awful, awful stuff. You are seeing impersonation attacks, where people are getting phone calls in the middle of the night saying, Mom, Dad, I'm in trouble.

(14:02): I've been in an accident. I need $9,000 to get out of jail. Here's a police officer, right? So that's small-scale fraud. You've also seen large-scale fraud. We have seen companies lose tens of millions of dollars because they are on a call like this one, talking to who they think is their CEO, and they're making wire transfers. We've seen that as well. We've seen account takeovers, where people will contact tech support saying, oh, I lost my password, and they are a deepfake, so they look like the account holders. You can do that with voice, you can do that with video. And what we've seen over the last few years is two things. One is the tech is getting very good. I mean, it really is, and you've seen it in very dramatic ways. But also it's getting easier to use. It used to be you'd have to go to GitHub and download a repo and get it compiled and get your GPU configured, and now it's all easily, readily available online. And the cyber criminals, and by the way, terrorists and extremists, latched onto this technology very early on and are using it. These are not hypothetical threats. We are seeing them unfold every day.

Ari Redbord

(15:05): What are the key case examples that you're seeing or examples that you've focused on when you talk about sort of these threats?

Hany Farid

(15:12): The ones that keep me up at night the most are the ones affecting children. We have seen horrific, horrific cases where young kids are taking their lives, because what people do is they take a kid's likeness from a single image. The kid's got a single image of themselves online, and they insert that into explicit child abuse material, and they send it to the kid and they say, if you don't take images of yourself that are explicit, we will distribute this. And so the kid panics and within seconds starts taking the photos, sends them, and now the game is on. You've seen other examples where kids will be talking to who they think is a young teenage girl, who will convince them to take an explicit photo, which they will, and then the game is on. And we have seen horrific cases where young boys and young girls are taking their own lives because they are getting cornered and they don't know what to do.

(16:04): And these are organized scams being perpetrated by gangs, and it is horrific. You're seeing it, by the way, also at the organizational level. I've already mentioned you've seen major, major attacks. And by the way, for every one of those attacks that you read about in the paper, there are 10 that you are not reading about. Nobody wants to talk about these attacks. So millions, hundreds of millions of dollars are being stolen in cyber attacks, and nobody knows about it yet; some haven't even figured out how they're being attacked. Here's another one, by the way: imposter hiring. North Koreans have actually perfected this. They will apply for jobs as somebody who they are not. They are North Korean hackers, and they're not hacking your firewall, they're hacking HR. They're getting into your company and then installing malware. And it's frankly easier, because you and I both know that social engineering is better than hacking the firewall. And so you're seeing real problems and real weaponization of these technologies that are affecting individuals, organizations, and entire governments.

Ari Redbord

(17:00): Yeah, it's really well said. I mean, we focus a lot at TRM on North Korean cyber attacks because so many of them are against the cryptocurrency space, stealing billions of dollars over the last five years. And when you write about North Korean IT workers, the HR best practice is always, hey, make sure their camera is on, make sure you're talking to a real person. And you jumping onto this call as me shows exactly that those days are over, right? That's not really a precaution that's going to work anymore. It's not enough in this new world.

Hany Farid

(17:30): It's not enough. I was talking to a CISO the other day, a chief information security officer, and he said they are close to requiring in-person interviews, because they just can't trust what they're doing online anymore. And that is wild. And so I do think that this convenience that we have adopted over the last, God, five, ten years: do we have to throw it away? You simply can't trust it. The only thing I can trust is when I can reach out and touch you. That is a weird world we're entering.

Ari Redbord

(17:57): And I guess that maybe is a good segue: how do you think about mitigating these risks?

Hany Farid

(18:02): Yeah, so first of all, you're absolutely right: these risks have always existed, but now they are on steroids, because everything is automated and more sophisticated. You're not just getting a text message saying, hey, this is your boss. You're on a video call with your boss, and that lands very differently. So let's talk about mitigation strategy. As somebody who thinks very carefully about cybersecurity, I don't think this will surprise you: there's no magic answer here. We need lots of things. So let's start, in no particular order. And I say this first one reluctantly, but I think we need some regulation. We need guardrails that will allow for innovation, and for us to figure out how to leverage this incredible technology, but that also put some reasonable safeguards in place. We tried letting the internet be the internet for 20 years, and it's not great, let's be honest.

(18:46): So let's figure out how to put up some reasonable guardrails for the private sector, to say, look, let's live within these lanes and create some safety. So that's number one. Number two is we need good tech, and that's sort of my world. We need the ability to figure out if the person you're talking to is real or not, end of story. And that requires huge amounts of investment from academics, from governments, from the private sector, from the VC world. I would argue, by the way, there's a phenomenal imbalance in investment. If you look at investments in generative AI companies, you can measure that in the many, many, many billions of dollars. If you look at investments in cybersecurity, in preventing fraud from that technology, you can probably measure that in the tens of millions. And that doesn't seem right to me.

(19:28): And I think we need to start taking the defensive side of the tech more seriously. Then I think we need this conversation. People need to be aware of what is possible. We need to talk about these things, and people have to know, the way we did with spam, the way we did with malware, the way we did with viruses. Don't click on the link. Make sure you know who the sender is. Be careful with HTTPS versus HTTP. We created public awareness, and it helps. So we need public awareness. And here's the last one, and maybe I should have started here: we need the tech companies to start taking this more seriously. We have to start building safety in from the ground up, not years later after the harm is done, when it's a lot harder. The tech companies have to recognize that lots of cool things are coming from AI, but harm is coming as well, and they've got to start building in safety protocols from the very beginning. And that's also where regulation, if I can loop around, can help incentivize: to say, look, if you create a product that you knew or should have known is going to create harm, we're going to hold you responsible for it. So I think it's a combination of all of these things. And you know this as well as anybody: everything we're talking about is a mitigation strategy, right? I think you even said it. You didn't say, how do you prevent it? You said, how do you mitigate it?

Ari Redbord

(20:37): Absolutely. It's interesting. When you're talking about the investment in the space, I think regulation, I think compliance, building this as critical infrastructure, could be a business advantage. I want to go on the social media platform where I know what I'm going to see is real. And on the education piece, which I love, and this is why I wanted to have this conversation: TRM put out a report recently on AI-enabled crime. The US Treasury put out a paper on AI-enabled cyber and fraud risk for financial institutions. We have to be out there talking about these things. I have put out a couple of videos over the last few weeks where I tried to fake myself. You've seen them; I didn't do a great job. You are way better. But this is me trying to learn this stuff. In prep for this, you sent me two images: one is me in jail, and the other is me with a machine gun. I'll post those for folks to see. And then there's this video where it was a real CNBC appearance that I was on, but then all of a sudden I break into Spanish about halfway through. What should people be looking for when they're seeing these things?

Hany Farid

(21:38): So first of all, I can tell you that those two images were done in about five seconds. I pointed a service at you. It's from an organization that I advise that tries to create public awareness; it's called Civic AI. And what you do is you give it a link to a LinkedIn page or a Wikipedia page that has your photo (your LinkedIn page has a photo), and then you click a button that says, create a deepfake using this person's face. And that's all I did. It was literally a single button click away. It was nothing.

Ari Redbord

(22:05): It's not like they're just taking, this is not Photoshop, right? They're not just superimposing your image. They're almost recreating the look you should have in the moment you're in.

Hany Farid

(22:13): And by the way, different head pose, different lighting, different expression. So this is full-blown what's called text-to-image. There's a prompt saying, put this face in, and it generates the image. And I can keep clicking all day long, by the way, and make one image after another image after another image, and I can prompt it with whatever I want. And it's very sophisticated, and the images are very compelling. And to your question, before we get to the videos: how can people tell? They can't. They can't. You have to recognize this. We can, because we're pretty good at this. But the reality is, we have done perceptual studies where we show people images, real images, fake images. Most people are slightly above chance. And the only predictor of how good you are is actually an inverse predictor: the more confident you are in your ability to do this task, the worse you are at it. Images have effectively passed through the uncanny valley; they are almost indistinguishable from reality, at least not reliably.

Ari Redbord

(23:06): No, that's extraordinary. And the video . . .

Hany Farid

(23:07): Yeah, let's talk about the video. This is my new favorite obsession. So what I did is I went and found a CNBC interview where you were talking about some crypto frauds and the trends we've been seeing. I took that video, I clipped it down to 30 seconds just to make it a little bit more manageable. Again, I uploaded it to a service that I pay a couple of bucks for, and I simply said, translate this into Spanish. So first of all, it has the video of you talking. The first thing it did is transcribe the audio, so it went from audio to text. That's AI number one. Then it translated English to Spanish. That's AI number two. Then it synthesized somebody speaking Spanish, but in your voice, so notice that's not some random person. That's AI number three. And here's the kicker: it then regenerated the video with your mouth moving, speaking Spanish. So four AIs, a click of a button. It took maybe five minutes in the queue, and I had the video. And then what I did is I just spliced the two videos together. I like when it goes from English to Spanish because it's a little jarring. You can see it.
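
To spell out that chain for readers, here is a minimal sketch of the four-model pipeline as described, in Python. Every function is a hypothetical placeholder with a dummy body so the sketch runs; none of the names correspond to any particular vendor's API. The point is how little glue the attack requires.

    def transcribe(video_path: str) -> str:
        # AI #1: speech-to-text on the original clip (placeholder).
        return "transcript of the original English audio"

    def translate(text: str, target_lang: str) -> str:
        # AI #2: machine translation (placeholder).
        return f"[{target_lang}] {text}"

    def clone_tts(text: str, reference_video: str) -> bytes:
        # AI #3: synthesize the translated text in the original speaker's
        # voice, cloned from the clip itself (placeholder).
        return b"synthesized-spanish-audio"

    def lip_sync(video_path: str, audio: bytes) -> str:
        # AI #4: regenerate the video so the mouth matches the new audio
        # (placeholder); returns the path of the dubbed clip.
        return "dubbed_" + video_path

    def dub(video_path: str, target_lang: str = "es") -> str:
        text = transcribe(video_path)
        translated = translate(text, target_lang)
        audio = clone_tts(translated, reference_video=video_path)
        return lip_sync(video_path, audio)

    print(dub("cnbc_clip_30s.mp4"))   # four models, one function call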

Ari Redbord

(24:12): It was really jarring. This thing is perfect. And look, this is just me messing around in Spanish on a thing. But what if this is Kim Jong-un declaring that he's launching on South Korea, right?

Hany Farid

(24:24): Correct.

Ari Redbord

(24:24): Talk me through that.

Hany Farid

(24:25): Here's the nightmare situation: geopolitics. It would be possible for anybody today to create a video of a world leader, North Korea, Russia, China, the US, saying, I've launched nuclear weapons against, fill in the blank. Take that video, post it online, have it go supernova viral, and before anybody figures out what's going on, we're off to the races with a global nuclear meltdown. Do I think that's likely? No. Do I think it's possible? A hundred percent. I think this is more a question of when, not if, this will happen.

Ari Redbord

(24:57): I have to ask crypto questions, always. It's TRM Talks. First of all, you have the sort of deepfake North Korea piece; phishing attacks and social engineering are supercharged with this technology. You had a great example from a recent KYC experience you had at a crypto exchange. Talk our folks through that.

Hany Farid

(25:14): Yeah, this is an interesting experience for me and sort of the world I occupy. So a couple of years back, my nephew got me a fraction of a Bitcoin, and I thought, oh, that's cute, and I sort of ignored it for a couple of years. And the other day I happened to be reading an article, and I'm like, I wonder what happened to that Bitcoin. So I opened up my Coinbase and I tried to access the wallet, and it said no. They said, oh, we can't verify your identity, it's a new phone. I'm like, alright, whatever. And then I thought, you know what, I'm curious how they're going to validate my identity. And they did exactly what I thought they would do. They said, okay, open up your camera, hold it up to your face, hold up your ID, and we'll validate your identity. And I remember thinking, well, this is really dumb. I could have faked this so easily. I could have created a fake ID and a fake face of Hany, because Hany has a digital footprint. But that's how it validated me. There was no human in the loop. It was pure KYC with video authentication. And I thought, well, that's pretty easy to spoof.

Ari Redbord

(26:05): What do you say to folks, whether they're in the compliance space or from financial institutions or crypto businesses, what's the answer to this? Obviously it's implementing technology like what you're building, ensuring that you have robust blockchain intelligence solutions like TRM. But what is your message there?

Hany Farid

(26:21): Yeah, well, the first message is, it's a brave new world, and whatever you were doing yesterday is not going to work today. So if you are a bank and you are doing authentication based on voice, my voice is my ID, stop it. And if you're a customer, stop it now. And the way I think about it is: think about the risk, right? Think about what is the risk associated with getting this person's identity wrong. In some cases the risk is low, and in some cases it is exceedingly high. And so I think if you're a financial institution, if you're a crypto exchange, you've got to fundamentally rethink KYC. You've got to rethink the way you've been doing things. They don't work anymore.

Ari Redbord

(26:56): I have a question on my list that I realize is just ridiculous. I was about to ask you something like, where do you see this space headed in the next five or 10 years? Hany, where do you see this space headed in the next five or 10 minutes?

Hany Farid

(27:09): So first, just to frame how fast it is, because it doesn't just feel fast, it is fast. Let me give you some framing for this. The PC revolution, the personal computer revolution, was roughly 50 years from the very beginnings of the electronic computer to when about half of American households had a computer. Five decades, which felt fast, by the way. The internet revolution was about 25 years, from the very beginnings of the HTML protocol to half of the world's population being online. The mobile revolution was about five years, first iPhone to half the world's population having a mobile device. And OpenAI went from zero to 1 billion users in one year. Now, arguably, AI started back in the 1950s, but there was more or less not a lot going on, and then just very suddenly: 50, 25, 5, 1. It really is accelerating. So now let's get back to your question. I think there are going to be two things that happen.

(28:02): One is there will be more of the same. The LLMs, the large language models, will get better. The generative AI will get better. AI will start to become integrated into our devices. I think we're at a fork in the road in terms of, does the technology primarily work for us, or does it start working against us? And that's the part I don't know, because if you look at technology today, nobody can deny phenomenal advances, but you also can't deny real harm from technology, from the frauds to the scams to the misinformation to the conspiracies. And I am concerned that if we keep going down the road of the last 20 years, technology is going to start working against us and not with or for us.

Ari Redbord

(28:41): It's an extraordinary moment. There's no way I'm going to let us end on an existential threat to humanity. Thank you. That's just not how I roll on TRM Talks at all. We are optimistic, we are forward-looking, and I am really overwhelmed by the good that it can bring to building a business, or the things that we do. I mean, you have built a company based on stopping bad actors from using this technology, as have we. And I think at the end of the day, good tech will meet the moment.

Hany Farid

(29:10): I want to believe that. And by the way, I can tell you that we have built a company over the last two and a half years, and every single researcher and engineer in our company uses a large language model to write, or help write, every single line of code in our software. This is an accelerant. We do things now in hours that used to take weeks. I still write code, by the way, and I will write entire techniques, operationalized, in hours, when it would normally take me weeks and weeks to do. It is phenomenal.

Ari Redbord

(29:39): It's absolutely extraordinary. You have to promise to come back on TRM Talks in like five or 10 minutes when this space changes.

Hany Farid

(29:46): I'm so grateful to people like you talking about this stuff because I think it's important. I think everybody has to listen to this. I don't want to scare the bejesus out of people, but we got to take this stuff seriously. And let's have this conversation again in six to 12 months. I mean, I'll probably send you an email in a few minutes and let you know what I just found. But let's talk again because I do think by the end of this year, the landscape will have shifted underneath us. And I think there will be new and exciting and terrifying things to talk about.

Ari Redbord

(30:10): If anything is going to scare people, it's going to be AI Ari on this Riverside platform kicking off our show today. Thanks for joining TRM Talks. That was really one of the most extraordinary TRM Talks we've ever had, and I think that's in large part because we're in this technology moment where the world is moving faster than it has ever moved before. And talking to Hany, really one of the foremost experts in the world at understanding this moment, was extraordinary. When we put out a report a few weeks ago on AI crime, we wanted to understand, on the one hand, what are the activities, what are the ways that criminals are leveraging this technology? And there's no greater expert in the world on deepfakes in particular. And deepfakes aren't just for the sake of deepfakes, right? This is how cyber criminals engage with KYC platforms, right?

(31:00): At cryptocurrency businesses. This is the way North Korean cyber criminals can potentially get jobs in the IT space. And this is the way pig butchering scams and other types of activity can be supercharged. Talking to Hany was amazing, to get his perspective, but also to see the way he's leveraging the technology to understand it. I mean, the video of me was something I'll never forget. My response, and Wesley's response, when we first saw it was very much, wow, I can't believe I'm seeing this right now. I think that's the reaction so many people are having. But the real question, the stuff that's most interesting to me, is: okay, we're going to have these really existential risks out there to the financial sector, to governments, to national security, but how are we going to mitigate those risks? And I still think, at the end of the day, it was an optimistic conversation: that building technology like TRM's blockchain intelligence, and the work that he's doing at GetReal trying to understand deepfakes, is going to play a critical role. And I've always believed that we can build, and are building, technology to meet the moment. This conversation, you can probably hear it in my voice, I'm just so energized about it. So energized about the work that we're doing at TRM on this issue. And we're going to do a lot more conversations like this.

(32:12): Next on TRM Talks, I sit down with crypto compliance expert Brandy Reynolds. If you love the show, leave a review wherever you're listening to it and follow us on LinkedIn to get the latest news on crypto regulation, compliance, and investigations.

TRM Talks

(32:31): TRM Talks is brought to you by TRM Labs, the leading provider of blockchain intelligence and anti-money laundering software. This episode was produced in partnership with Voltage Productions. The music for this show was provided by E-Clicks.

Ari Redbord

(32:47): Now let's get back to building.

About the guests

Hany Farid
University of California, Berkeley

Hany Farid is a Professor at the University of California, Berkeley with a joint appointment in Electrical Engineering & Computer Sciences and the School of Information. He is also the co-founder and Chief Science Officer at GetReal Labs, a cybersecurity company focused on preventing malicious threats from generative AI.

Hany's research focuses on digital forensics, forensic science, image analysis, and human perception. He received his undergraduate degree in Computer Science and Applied Mathematics from the University of Rochester in 1989, and his Ph.D. in Computer Science from the University of Pennsylvania in 1997. Following a two-year post-doctoral fellowship in Brain and Cognitive Sciences at MIT, he joined the faculty at Dartmouth College in 1999 where he remained until 2019.
