It’s no surprise that we’ve seen a huge increase in fraud rates over the past year – the lack of face-to-face contact during lockdown has made it easier than ever for fraudsters to pass identity checks. Not only that, but the transition to a digital economy has created a sophisticated automation infrastructure – attackers can now strike and evade the authorities at any moment. As the world has become increasingly digital over the past decade, deepfakes are emerging as a major security threat in a world already plagued by disinformation.
About the Author
Stephen Ritter is CTO at Mitek.
We’ve gotten used to seeing the likenesses of high-profile figures, from Mark Zuckerberg to Queen Elizabeth, deepfaked to raise awareness of the threat or for our entertainment. However, we are approaching a point where the technology will be used for more sinister purposes as it becomes more accessible to fraudsters. In the not-so-distant future, it will likely be ordinary people who pay the price for deepfake attacks, as scammers target the general public for financial gain. With a UCL study ranking deepfakes among the biggest threats we face today, it’s time to beef up our defenses and put fraudsters on the back foot.
An imminent upsurge in fraud
Billed as the 2020s’ answer to Photoshop, deepfakes use artificial intelligence to replace one person’s appearance or voice with another’s in recorded video. Most people know the technology from memes and fake videos shared online, but its ability to manipulate facial expressions and speech has caught the attention of scammers. High-profile examples of successful deepfake attacks include a 2019 incident in which cybercriminals used fake audio recordings to impersonate a CEO and demand the transfer of $243,000. Until recently, successful deepfake attacks were rare – but the pandemic changed that, opening the door for scammers.
We are now seeing an increase in the offerings of deepfake technologies and services on the dark web, where users share rogue software, best practices and how-to guides. All of this demonstrates a concerted effort in the cybercrime sphere to refine deepfake tools, which in turn points to the first signs of a new wave of looming fraud.
What’s concerning here is that deepfakes are one of those scary technologies that allow cybercriminals to attack on a massive scale. While successful attacks are not yet common, they could become rampant as this technology continues to evolve.
In the banking sector, increasing branch closures coupled with the effects of the pandemic mean that a new customer can now join a bank without ever visiting a branch. A fraudster could, in theory, open a number of new credit cards under a stolen identity and, after a few months of solid credit history, max them out and disappear. These unscrupulous individuals can then do the same thing elsewhere, stealing identities en masse. If left unchecked, we could eventually reach a point where such cases are beyond our control. So how can we possibly protect people from deepfake attacks?
The new line of defense
Biometric authentication is leading the charge in the growing fight against identity fraud. Banks are already using facial biometrics, in conjunction with liveness detection, to verify faces and documents, and to ensure that scammers aren’t bypassing screening processes with, for example, a photo of a photo. But as the capabilities of deepfakes continue to develop, the weapons in a fraudster’s arsenal could get them past the banks’ systems. That’s why it’s time to add another link to the security chain and send fraudsters running – and that’s where our voices come in.
Many of us already use our voices to manage our daily lives – we ask Alexa for the weather, Siri to call mom, and our smart plugs to turn off the lamp – but we often associate voice technology with the invasion of our privacy rather than its protection. In fact, voice offers a powerful and convenient form of biometrics that will have a vital role to play in improving anti-fraud defenses. Where one form of biometrics presents a strong defense against potential attackers, two offer far more protection, resulting in lower fraud rates. In our experience, the combination of voice and facial biometrics makes the verification process almost impenetrable to scammers, providing four layers of protection: liveness detection for each modality, plus face and voice recognition.
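To make the layering concrete, here is a minimal sketch of how a verification system might combine the four checks described above. All names, weights, and thresholds here are illustrative assumptions, not any vendor’s actual API or tuning.

```python
# Hypothetical sketch: combining face and voice biometrics into one decision.
# Weights and the acceptance threshold are illustrative placeholders that a
# real system would tune against labeled match/non-match data.

def verify_identity(face_score: float, voice_score: float,
                    face_live: bool, voice_live: bool,
                    threshold: float = 0.85) -> bool:
    """Accept only if both liveness checks pass and the fused
    match score clears the threshold."""
    # A spoofed sample (photo of a photo, replayed audio) fails immediately.
    if not (face_live and voice_live):
        return False
    # Simple weighted fusion of the two match scores.
    combined = 0.6 * face_score + 0.4 * voice_score
    return combined >= threshold
```

The key property is that a deepfake must now defeat every layer at once: passing face recognition alone, or faking a voice without surviving liveness detection, is not enough.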
Not only that, but voice biometrics can be captured by our devices easily and passively, which means engaging consumers is not difficult. It takes little extra time, for example, if, in addition to using liveness detection to check a selfie when someone opens a new account, banks then ask them to repeat a phrase. The process may add a few seconds to the user experience, but that’s not a major hurdle to overcome. And there will always be trade-offs in striking a balance between security and convenience.
Managing risks in the digital economy
Focusing on the right use cases is essential to making voice biometrics a form of everyday authentication. The last thing people want is to use voice in a situation where it isn’t needed. Where text-based passwords are more than capable of protecting accounts on a retailer’s website, securing our bank accounts requires more sophisticated means of authentication. However, people are unlikely to have a problem with setting up layered biometrics if it means protecting their finances, especially if it doesn’t take more than a few seconds.
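The use-case matching described above amounts to risk-tiered authentication: stronger factors only where the stakes justify them. The sketch below illustrates the idea; the action names and tiers are hypothetical examples, not a real policy.

```python
# Hypothetical sketch: matching authentication strength to the risk of the
# action being performed. Action names and tiers are illustrative only.

LOW_RISK = {"browse_catalog", "retail_login"}
MEDIUM_RISK = {"retail_purchase", "change_address"}
HIGH_RISK = {"open_bank_account", "large_transfer"}

def required_auth(action: str) -> str:
    """Return the authentication required for a given action."""
    if action in HIGH_RISK:
        # Finances warrant layered biometrics: face + voice + liveness.
        return "password + face and voice biometrics"
    if action in MEDIUM_RISK:
        return "password + one biometric factor"
    if action in LOW_RISK:
        # A text-based password is enough for a retailer's website.
        return "password"
    # Unknown actions fail closed rather than open.
    return "deny"
```

The design choice is the article’s point in miniature: people tolerate a few extra seconds of biometric checks when protecting their bank account, but not when browsing a shop.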
Businesses will never be 100% protected, and any product that promises total security would ultimately be unusable by consumers. Instead, risk management is the name of the game. The combination of traditional security measures and layered biometrics will give us the best chance of forcing fraudsters to back down, especially in our new ‘all-digital’ world.