The Arup Deepfake Scam transcript

Introduction

This is “What Just Happened?,” the podcast that looks at the biggest brand crises of our time, what they meant for organisational strategy and behaviour, and their lasting impact on our approach to crisis communication.

I’m Kate Hartley. And I’m Tamara Littleton. And together, we’ll delve into what happened, why it mattered, and whether it could happen again.

Episode

Tamara Littleton: We’re keeping it current for today’s episode, but we’re also shaking things up a bit by not talking just about one crisis, but a whole subject that’s really worrying companies at the moment—and one that I think we’ll keep coming back to on this podcast. Kate, do you want to tell us what we’re talking about today?

Kate Hartley: Yes. So today we’re talking about the rise of deepfakes, and one incident in particular that I think really, really shook the corporate world—which was the Arup engineering scam. Certainly, at the time that we’re recording now, it’s still the biggest deepfake scam we’ve ever seen, or at least the biggest one that’s publicly known.

So, what happened? In February 2024, Hong Kong police reported that a finance worker in Hong Kong had paid 200 million Hong Kong dollars—about 20 million pounds—to criminals, and that was the result of a deepfake scam. That finance worker was part of what was then an unnamed company—it didn’t come out that it was Arup until a few months later—and they had been asked to make 15 different transactions to five different bank accounts.

You can see how, for a big company, those kinds of transactions, that sort of amount, might seem quite normal. But the scammers asked the finance worker to get on a video call to approve the payments, which they did—obviously, that’s the right thing to do. They saw on the call what appeared to be their Chief Financial Officer and other members of the finance team, and so the payments were approved and the finance worker made them.

But of course, it turned out that the finance worker had been the only real person on the call. All the others were deepfakes, and the finance worker only realised that when they spoke to the company’s head office.

TL: I mean, this is so shocking. It’s kind of like the stuff of, you know, movies or something like that. Because we know that deepfake images, and even audio, are quite common—but video like this is a whole new level of sophistication in these kinds of scams.

And everyone’s been talking about it, but do we know how it was done?

KH: Everyone has been talking about it. It’s still, I think, really shocking. We know a little bit about it, but not a massive amount. The Hong Kong police said that the criminals had downloaded videos in advance and added deepfake audio to them.

Now, we don’t really know the detail of exactly what software they used, for example, to do that—but it must have been pretty sophisticated to fool someone on a call. The Hong Kong police certainly said they looked like real people. What we don’t know is things like the quality of the video—was it on all the time? Was it blurry? We just don’t know those sorts of things.

TL: I wonder if there was something—because there were so many people on the call. I’m just sort of hypothesising… I can’t even say it—actually, I’m hypothesising. But it must be easier to spot a single person that is a deepfake. With lots of people, maybe it’s a bit harder to spot.

KH: Well, I don’t know. Am I? Am I a deepfake? Do we even know we’re not deepfakes on this call? I don’t know.

TL: Do you? I think we need a code word to make sure. But you know, actually, I don’t think a deepfake could do your version of the laugh, to be honest.

KH: That’s rude.

TL: So, I’m digressing—I am digressing—but this actually is really scary. It’s a scary moment in time. I think it’s so unbelievable—but it happened.

Obviously, as you said, it involved Arup, the British engineering firm. What was the impact on Arup, and how did they handle it?

KH: So, yes, it only came out that it was Arup a few months later. But I think they probably handled it pretty well. It was lucky, in a way, I think, that it came out a few months later.

In terms of impact, Arup turns over about 2.2 billion pounds a year. So although 20 million is not nothing, it’s not necessarily a catastrophic amount for an organisation that size. But I did feel quite sorry for them, because this is the first scam of this type using video deepfakes—at least that people know about. They’re always going to get referred to, aren’t they, by people like us doing podcasts on it.

And I don’t necessarily think that’s the kind of pioneering reputation you want, to be honest. But I think their response was pretty good. In its statement, the company said the usual—“we can’t comment on an ongoing investigation”—which I think is fair enough. But it was very clear that the scam didn’t impact its financial stability or business operations, and that no internal systems had been compromised.

I think because it hadn’t been named straight away, it was clear by the time it was outed as being Arup that it actually wasn’t going to suffer financially. So that probably helped, to be honest. Normally, we would say get ahead of the story and those sorts of things—but in this case, I think it probably helped that they weren’t named straight away.

And they are owning it. The Chief Information Officer, Rob Greig, has said that he hopes Arup’s experience will help other organisations who are going through something similar. Arup’s East Asia Chair, Andy Lee, stepped down a few weeks later, but it’s really not clear whether that was connected to this scam or not. He’s never commented on that, and neither has Arup—so it may be completely separate.

TL: That’s interesting what you say about how they’re trying to sort of potentially flag this for other people and change the industry. It reminds me of our podcast on Tylenol—how, you know, sometimes there are moments where you sort of think bigger than yourselves, and how can you use a crisis to help other people.

KH: Absolutely—and also a realisation that this is going to get linked to them forever more, really. So you might as well lean into that.

TL: But it’s not the only attempt at this, either, because I know scammers tried to do something similar to WPP, didn’t they? There was a scam that impersonated CEO Mark Read, asking another WPP executive to support him in setting up a new company.

But luckily, that one was spotted and closed down.

KH: There have been lots of voice deepfakes, but not that many video ones. I think that’s why this one, and the WPP one, stand out so much. But to be honest, it’s only a matter of time. I mean, this technology is getting better all the time, it’s really readily available, and it’s pretty easy to do, I think.

TL: And it’s so worrying for businesses. I know that the World Economic Forum’s report in January 2024 showed that the top concern for global business executives over the next two years is the rise of misinformation and disinformation, including deepfakes. It just feels like a really new thing, but there’s been a lot leading up to this moment.

I know from my social media experience over the years that there have been so many apps and filters that let us play around with fakes—things like face swaps and apps to make you look older or younger, and Snapchat filters. All that kind of thing, which I guess must have gradually contributed to the development of this tech.

KH: I mean, it must have done, mustn’t it? Because they’re all really deepfakes in a way—or shallow fakes. I don’t know what you want to call them, but it’s definitely not a new thing. I think it is worth digging into the history of it a little bit, because even back in the 80s, people were slicing together videos or audio files.

There was a really famous one that faked a telephone call between Reagan and Thatcher. It appeared to show Reagan talking about a nuclear conflict and Thatcher saying she deliberately escalated the Falklands War. It was quite a famous fake, and it eventually came to light that the recording was made by a punk band from Essex. But for a while, it created a really big issue because the US government thought it was propaganda from what was then the Soviet Union—so it became a sort of international incident.

That kind of technology has obviously got much more sophisticated, and we’ve gone beyond just, you know, old-fashioned phone calls. All the effects are really common now. As recently as 2020, there was so-called “new music” generated with AI tools like OpenAI’s Jukebox that sounded as though it was by artists like Elvis and Frank Sinatra, and Jay-Z’s company actually filed takedown requests with YouTube to get deepfake audio of his voice removed.

And of course, think about all the banks that use voice recognition systems—they’re having to guard against this stuff all the time. Audio fakes are designed to get around banking systems. This is designed to be cybercrime and to get people’s money. But I think the difference now is that it’s really, really easy to do.

Since, I think, 2017, there’s been a sort of tipping point. And of course, it started as everything on the web does—it started with porn. There’s been a really worrying rise in celebrities’ faces being superimposed onto pornographic images and videos. That’s really worrying for people too.

And of course, there’s loads of this stuff around elections.

TL: I mean, you know me, I’m an optimist. Sometimes brands use it for good reasons too. I remember an ad with David Beckham where he was speaking nine different languages, for example. There have been some amazing campaigns that use this sort of deepfake technology—supposedly for good.

KH: Yes, and back in my PR day—so, back in 2009—I was involved with a campaign that mimicked Bob Monkhouse, if you remember who that was, talking about his own death from prostate cancer to raise awareness of prostate cancer. That was done with his family’s full permission, of course. But it was really difficult to mimic that at the time—it was really, really clever technology.

Wasn’t there a Taylor Swift one as well?

TL: Yes, there was—one of my favourites, actually. There were ads that went all across Facebook, Instagram and TikTok that showed Taylor Swift endorsing cookware. Apparently, she loves Le Creuset—so, who doesn’t?

KH: Exactly—who doesn’t? Apparently, on her TikTok and Instagram posts, if she’s in her kitchen, you often see Le Creuset products. So it made sense that she might be an ambassador for the brand, or be promoting it. But actually, she’s not an official ambassador—and it wasn’t really her in the ads.

It was a fake, but someone had paid for those ads. These were paid-for campaigns.

TL: Yes—and it wasn’t connected to Le Creuset either, just to make that clear.

KH: No, absolutely it wasn’t. The ads asked people to click on a link for a giveaway for the pan, and of course, that was a scam. It took people to a fake site that looked like the Food Network, and then it said to consumers, “All you have to pay is $9.99 for a shipping fee.” But of course, as you said, it had nothing to do with Swift or Le Creuset.

TL: So let’s try and be a bit helpful to our wonderful listeners. What are the signs of a deepfake?

KH: The World Economic Forum’s website has some really good advice on this, and it’s worth checking in with them regularly, because they do update it all the time. For video, the MIT Media Lab—referenced on the World Economic Forum site—says you should look out for a few things.

For example, if someone’s blinking in a way that just seems a bit weird, that’s a sign to look out for. You also need to look out for whether their lips are moving naturally, or if they have odd reflections in their eyes or glasses.

TL: I’m just checking.

KH: For example, do they have different reflections in each eye? Those sorts of things happen on deepfakes. Also, does the age of the skin match their eyes and hair? Or are there any shadows on their face that just don’t look right?
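For anyone who wants to experiment with the blinking cue Kate mentions, here is a minimal, illustrative sketch in Python. It is not a real deepfake detector: it simply estimates a blink rate from a recorded video using OpenCV’s stock Haar cascades, treating a frame where a face is found but no eyes are detected as a momentary blink. The file name, sampling rate and the “15–20 blinks a minute” rule of thumb are assumptions for the example, not anything referenced in the episode.

```python
# Illustrative sketch only: a crude blink-rate check, not a production detector.
import cv2

def estimate_blink_rate(video_path: str, sample_fps: int = 10) -> float:
    """Very rough blinks-per-minute estimate: count open -> closed transitions,
    treating "face detected but no eyes detected" as closed eyes."""
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    cap = cv2.VideoCapture(video_path)
    native_fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(1, int(round(native_fps / sample_fps)))

    frames_checked = blinks = 0
    prev_closed = False
    frame_index = -1
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_index += 1
        if frame_index % step:
            continue  # only sample roughly sample_fps frames per second
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        if len(faces) == 0:
            continue  # no face found in this frame: ignore it
        frames_checked += 1
        x, y, w, h = faces[0]
        eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
        closed = len(eyes) == 0
        if closed and not prev_closed:
            blinks += 1  # open -> closed transition counted as one blink
        prev_closed = closed
    cap.release()

    minutes_observed = frames_checked / (sample_fps * 60.0)
    return blinks / minutes_observed if minutes_observed else 0.0

if __name__ == "__main__":
    rate = estimate_blink_rate("call_recording.mp4")  # hypothetical file name
    print(f"Estimated blink rate: {rate:.1f} per minute")
    # Humans typically blink roughly 15-20 times a minute; a figure near zero
    # (or absurdly high) is a reason to look harder, not proof of a fake.
```

An unusually low or high figure is only a prompt to look more closely; lighting, filters and video compression will all confuse a heuristic this crude.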

TL: But I guess those sorts of things could be quite hard to tell with some filters. You know, some video call platforms have “make me look perfect” kinds of filters, don’t they? Even things like Zoom.

KH: Yes, so I guess it could be quite hard.

TL: This is a kind of good shout-out for authentic leadership and people knowing each other. You know, I guess it comes down to whether what you’re hearing or seeing just doesn’t add up—or if the person behaves in a way you don’t expect.

I’m thinking back to—obviously, it wasn’t an audio or video deepfake—but I’ve had people try to scam my financial director into moving money around. They flagged it and said, “The person was so rude, I knew immediately it wasn’t you, Tamara,” which is quite a nice testimony.

KH: You literally are the most polite person I’ve ever met, so I can totally see that. But I imagine, you know, if I suddenly came onto a call like this looking like I’ve had Botox, for example—then I haven’t, it’s not me! Those are the sorts of things you’ve got to look out for.

Do they look different from normal? Are people trying to move conversations off a usual platform? If you normally meet on Teams, is someone suddenly asking you to meet on a different video platform? Does their background look off? Do they constantly refuse to put video on? Those are the sorts of things—anything that seems off—we should be questioning.

TL: And we’ve come a long way from, you know, looking for images with extra fingers on a hand, or spotting that the Pope wouldn’t wear a silver puffer jacket, for example. Are there actually tools that can help us?

KH: Yes, there are. There are loads of tools in development or on the market, but nothing is 100% accurate yet. Just in the last few days, at the time of recording, I’ve seen an app from a company called Hire, for example, that says it can spot deepfake audio on a mobile phone.

So there’s a lot in development. But I was doing some training yesterday in crisis comms for comms and PR people, in-house and agency, and one of the questions they asked was: how do you know the people developing the apps aren’t the same people creating the deepfakes themselves? That blew my mind.

For the record, Hire definitely isn’t one of those companies. But it is really hard to know—how do you know who the good people are and who the bad people are? So I think we just need to be much more aware of this stuff generally.

TL: I feel like we’re at the beginning of this journey, and it’s going to get worse before it gets better. We’re going to see more of this. And of course, the legal side is really important too—understanding what you can do if you’re the victim of a deepfake scam, how you should approach it.

And we’re actually going to go and dig into that with our guest after this short break.

Break

TL: We’re delighted to have Emma Woollcott with us. Emma is the Head of the Reputation Protection and Crisis Management practice at law firm Mishcon de Reya, where she advises business leaders and high-profile individuals on areas like defamation, breach of confidence, misuse of private information, harassment and data protection concerns.

She particularly specialises in digital technology and has done a lot of work in the area of deepfakes. Thank you, Emma, for being with us.

Emma Woollcott: My absolute pleasure. Thanks for having me.

TL: So Emma, this is such a fascinating area. My big question is: could this happen to you? If you’re a victim of a deepfake, what should you do?

EW: It certainly could happen to you. Our sense is that this sort of technology is becoming more accessible, more sophisticated, and this type of fraud is becoming more prevalent. Deepfake fraud is an old crime in a new form.

Fraud makes up 40% of crime in the UK, and financial crime is big business. Deepfake technology is a technologically enhanced form of social engineering—using the tech to pose as people you know, to lower your guard and gain your trust. From there, criminals gain access to parts of the business they wouldn’t ordinarily be able to reach.

TL: So is there a way to mitigate the risk?

EW: I think it’s about awareness and training. Businesses should really consider where their vulnerabilities are. If you think about the psychology and how fraud tends to work, it often exploits things like hierarchy.

Fraudsters will often pretend to be senior executives, targeting more junior staff and giving them instructions. They’ll use urgency to pressure people into acting quickly—making decisions without checking, and bypassing the usual controls. They’ll also exploit confidentiality, telling junior employees, “I need you to do this. Don’t tell anyone. It’s a special project.”

So looking at areas of the business that are potentially vulnerable—and thinking about how fraudsters exploit psychology—is a good way to focus your training.

We’ve seen that deepfake fraud attacks have often targeted newly acquired subsidiaries, which might not be fully across the company’s systems yet. Or they go for overseas subsidiaries, where staff are further from the core operations, and where time differences or language barriers make it harder to double-check identities.

Think about how your business operates—especially how money flows and how authorisations are obtained. For example, if someone joined this call with a real-time face morph and presented themselves as Tamara, who would act on her instructions? And what can you do to mitigate that?

We were doing some resilience work with a business recently, and they talked about using two-factor authentication and having two approvals for transactions. Rather sensibly, they decided the second level of authentication should come from someone not at the top of the business—someone who isn’t in the public eye, whose image and voice aren’t publicly available and who is therefore less likely to be cloned by fraudsters.
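Here is a minimal sketch, in Python, of the dual-control idea Emma describes. It assumes a simple internal payment-release check rather than any real banking system, and the names, roles and the £50,000 threshold are illustrative assumptions. The rule it encodes is that a high-value payment needs two distinct approvers, and at least one of them must be someone who is not a publicly visible figure whose face and voice could easily be cloned.

```python
# Minimal sketch (not any firm's real system): a payment-release rule requiring
# two approvals, with the second approval from a low-profile member of staff.
from dataclasses import dataclass

@dataclass(frozen=True)
class Approver:
    name: str
    role: str
    publicly_visible: bool  # e.g. gives keynotes, appears in press or on podcasts

@dataclass(frozen=True)
class Payment:
    amount_gbp: float
    beneficiary_account: str

HIGH_VALUE_THRESHOLD_GBP = 50_000  # illustrative; set to your own risk appetite

def can_release(payment: Payment, approvals: list[Approver]) -> tuple[bool, str]:
    """High-value payments need two distinct approvers, at least one of whom is
    a low-profile member of staff (harder for fraudsters to clone)."""
    if payment.amount_gbp < HIGH_VALUE_THRESHOLD_GBP:
        return len(approvals) >= 1, "low value: single approval is sufficient"
    if len({a.name for a in approvals}) < 2:
        return False, "high value: two distinct approvers required"
    if all(a.publicly_visible for a in approvals):
        return False, "high value: one approval must come from a low-profile approver"
    return True, "released under dual-control rule"

# Hypothetical example: a cloned "CFO" alone cannot push a payment through.
cfo = Approver("A. Chief", "CFO", publicly_visible=True)
controller = Approver("B. Ledger", "Financial Controller", publicly_visible=False)
print(can_release(Payment(2_000_000, "HK-000-000"), [cfo]))              # blocked
print(can_release(Payment(2_000_000, "HK-000-000"), [cfo, controller]))  # allowed
```

The design choice mirrors Emma’s point: even a flawless real-time clone of the most visible executive cannot, on its own, satisfy the release rule.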

TL: What’s fascinating about that, for me, is that it touches on the area of authentic leadership—which we talk about a lot. If people know who the leaders are, what they stand for, how they present themselves—maybe that could be a way of minimising the risk.

EW: I think that’s right. Leadership is a big factor in this sphere. I know you both talk a lot about culture and openness in an organisation, and it’s one of the ways you can mitigate against the impact of this sort of fraud. Maybe not the risk of it happening, but the risk of it having devastating consequences—by encouraging people to be quick to challenge and question.

They need to feel free to say, “Thanks, Tamara, for that instruction to send all our money away—I’m just going to double-check it.” And also to be confident that if they feel they’ve made a mistake, they can escalate it quickly. Our fraud lawyers and our incident response team always talk about the “golden 24 hours.” It’s really important after an incident to act swiftly, and if the culture doesn’t encourage transparency or candour, there’s a risk employees might try to cover something up or fix an issue themselves—when actually the organisation should be focused on stopping the money moving.

KH: I’m really interested in the fact that we’re not just talking about how to spot a deepfake—like looking for six fingers or shadows, as we discussed earlier—but actually about how to challenge it. The implication being that you soon won’t be able to spot a deepfake. You just need to be able to challenge it.

EW: We’re nearly there. The technology is terrifying and it’s advancing at pace. The deception in the Arup case was nearly a year ago—I think it was February 2024—and my understanding is that those were pre-recorded videos.

Now there is software available that can create real-time face morphs. How convincing the deepfake can be depends on how much audio and video of the person is available. But I joined a call with one of the forensic providers in this space, and my managing partner was supposedly on the call.

I’m pretty sceptical, and I’m used to this stuff, but I thought to myself, “It’s odd that he’s on this—he’s quite busy—but it’s an important call.” He didn’t say very much, but he moved and he spoke. Then they switched the face, in real time, to a man of a different ethnicity and different accent—right in front of my eyes.

The technology is now so advanced that we’re getting very close to a situation where humans won’t be able to detect it. There are people working on AI tools to test authenticity in real time. I think soon you’ll be able to buy plugins for video calls that test whether the person is really there. So where there is real vulnerability—say, in your finance function—you could have those tools running in the background.

At the moment, you’re right, we talk about reflections in sunglasses or checking if something looks “off.” One thing we’ve been discussing with businesses, in terms of planning and mitigation, is that fraudsters will often pretend to be someone you know—that helps build trust.

So if you don’t know the person well enough to reference shared conversations, for example, it’s okay to say, “Sorry to challenge you, but could I please see your work pass?” Ask for something you know they should have with them.

KH: This is just amazing. Should we all have—I’m just thinking—if we’re on podcasts, or if videos of us are out there, or leaders especially… should they all have a deepfake strategy to deal with this stuff the moment they’re in the public eye? Should that go hand-in-hand with a publicity strategy?

EW: I think for a fraud to be effective, it has to be quite targeted and quite sophisticated. There’s a lot of money in fraud—especially APP fraud, which is authorised push payment fraud. That’s when someone is manipulated into making a transaction, as in the Arup case.

It’s worth fraudsters investing time to find the right people, understand the systems, and make a targeted attack. It’s unlikely, I think, at this stage, that if you join a podcast someone would have the incentive to pose as you—because no one on this call is going to be asked to transfer millions of pounds.

TL: The bit that worries me is… what if the deepfake version of me is better than me?

EW: That is a real worry. I wonder, you know…

If you’d asked me even six months ago whether deepfake was just a buzzword, a noise being talked about, and whether there was really anything to worry about, I probably would’ve said: maybe down the line. I was very concerned about the amount of deepfake pornography, and we can talk about that separately.

But deepfake fraud? Even six months ago, I might’ve dismissed it as being a bit futuristic. Now, I think it’s widespread—it’s happening. The difficulty is that you don’t really know—it’s not something that manifests quickly as a comms problem.

We had a spate of ransomware attacks, where businesses found their systems frozen and couldn’t get them released unless a ransom was paid. Those events often become public—you try to manage them and reassure customers. But with APP fraud—authorised push payment fraud—someone gets tricked into paying money, and there’s no publicity.

The money is transferred, often through several accounts in quick succession, and communications are mostly between banks trying to trace and stop it. Unless there’s a regulator involved or a legal requirement to notify, it doesn’t usually become public. So it doesn’t often become a comms issue—and that means we don’t really know how prevalent it is.

But we’re seeing many businesses thinking about it and responding to concerns. My view is always: overreact to even weak signals. Have the training in place and the processes to slow payments down if anyone is concerned.

That training should focus particularly on overseas subsidiaries or junior members of the finance team. Help them understand not only that authentication procedures matter, but that this is a real issue, and they need to know exactly who to contact if they have concerns.

Those golden 24 hours can disappear quickly if someone goes home thinking, “That was a bit of an odd request to send £20 million—but I’ll think it through in the morning.” You want them to know who the first responders are, and to be empowered to act immediately.

Often, you don’t lose much by slowing some payments down. You can check with the bank that it’s gone where it should. The banks have a sophisticated interbank messaging system—if there’s a suspicious transaction, your fraud team can flag it, and the bank will send a message via the SWIFT system to the receiving bank to hold it.

That helps prevent it from being transferred again to a less accessible jurisdiction.

KH: And again, that all comes down to people reacting quickly, doesn’t it? That culture—someone being able to say, “I’ve just done this, and I now feel really uncomfortable about it.”

Ideally, they wouldn’t have done it in the first place—there would’ve been that pause—but at least if they’ve acted, they come to you quickly.

KH: Can we talk a bit about deepfake pornography? I know we’re mainly focusing on fraud, but I’m really interested in this, because it’s such a huge area at the moment. What’s the law around deepfake porn—and how does that compare to the laws around fraud? Are they the same or different?

EW: They’re different—and it’s all a bit piecemeal, really.

With fraud, fraud is a crime, and using deepfakes to encourage people to transfer money is just a new way of committing an age-old offence. There are both civil and criminal remedies. If you think you’ve been targeted by APP fraud, you should report it to the police or to Action Fraud, the UK’s national fraud and cybercrime reporting centre.

When it comes to deepfake pornography and manipulated images, it’s much more difficult legally. There have been private members’ bills put before the House of Lords, and now the government is trying to introduce a new offence for procuring and sharing manipulated images.

You may remember the Online Safety Act—it criminalised the sharing of intimate images, but only genuine ones. At the time it passed through Parliament, they hadn’t really thought about how pernicious and intrusive the sharing of manipulated images would be. So the deepfake element wasn’t part of that legislation.

We’re now trying to plug that gap. At Mishcon, we’ve been working with Baroness Owen in the House of Lords to put pressure on the government to improve the law and address that gap—and it does look like something the government is picking up.

TL: Emma, I know this is still a developing area, but can I ask—do we have any sense of the motivations behind deepfakes? Is it about money? Disruption? Revenge?

EW: It’s all of those things.

Financial fraud is clearly financially motivated. You can make a lot of money quite quickly by targeting the right person and tricking them into transferring large sums. Once the money is moved, it’s hard to trace.

What some people refer to as “revenge porn”—using deepfake technology to create explicit content—is often used to shame and humiliate, usually women and girls. So the motivation there is revenge or sexual gratification.

What I find most unsettling is the incentive people now have to create new apps that make it easier and easier to undress people. Deepfake porn started with famous women on websites like MrDeepFakes.com—which didn’t even rank on Google and had to be sought out. But now, there are apps that schoolchildren can download to undress their peers.

The technology is so sophisticated that even ordinary women are finding professional photos being manipulated into explicit images—and those are being shared. So the motivation is definitely about sex, revenge, personal gratification.

That’s why it’s so important the criminal law catches up—so we can properly dissuade those making or procuring these manipulated images. It’s not enough to go after the people creating the tech—many of them are in jurisdictions outside our reach. The individuals here, who are using it, need to feel the full force of the law.

TL: Thank you so much for what you’re doing in this area. I feel like everyone needs to update their risk register—this is so important. Thank you, Emma, for being with us.

Outro

You’ve been listening to “What Just Happened?” with Kate Hartley and Tamara Littleton. If you enjoyed the podcast, please subscribe, rate, and review.