By Steve Bradford, Senior Vice President EMEA at SailPoint
In 2021, a wave of deepfake videos began to emerge across the internet and social media. From humorous TikTok videos of Tom Cruise to unsettling speeches from Morgan Freeman explaining synthetic reality, AI-driven deepfakes captured the attention of internet users everywhere.
Over the past year, as AI has bled into the mainstream, technology that was once reserved for experts has fallen into the hands of the everyday internet user. Whilst this has produced some funny celebrity parodies across social media, and even the TV show ‘Deep Fake Neighbour Wars’, it has also opened the door to some very real, sci-fi-like threats.
Like many initially innocent technologies, deepfakes are now being exploited by cyber criminals for nefarious ends, with one of the latest victims being the world’s longest-serving central bank chief. Earlier this year, video and audio clips of the governor of the National Bank of Romania, Mugur Isarescu, were used to create a deepfake encouraging people to put money into a fraudulent investment scheme. Whilst the National Bank of Romania issued a warning that neither the governor nor the bank was behind the investment recommendations, the incident underlines the severity of deepfake threats, especially for financial services, where organisations and customers could pay a high price for disinformation.
With deepfake incidents in the fintech sector increasing 700% in 2023 compared with the previous year, let’s explore how financial services institutions can navigate these choppy waters and protect against AI-enabled impersonation scams.
The threat facing financial services
Unfortunately, the financial services industry is notoriously fertile ground for cyber-attacks. It is a prime target given the monetary gain on offer for fraudsters, the vast amounts of sensitive personal information it holds, and the opportunity to deceive and manipulate customers, who place so much trust in financial institutions like banks.
It’s no wonder, then, that these types of impersonation scams are gaining traction in the UK, among other countries. Just last summer, trusted consumer finance expert Martin Lewis fell victim to a deepfake video scam in which his computer-generated twin encouraged viewers to back a bogus investment project.
These types of attack are growing in prevalence. We’ve already seen a finance worker pay out $25 million after a video call with a deepfake of their CFO. Deepfakes could even be used to fraudulently open bank accounts and apply for credit cards. The danger and damage of deepfake scams are far-ranging, and banks cannot afford to sit still.
To combat this growing threat, the financial services industry needs stronger authentication than seeing and hearing. It’s not enough for financial professionals or customers to simply trust their senses, especially over a video call, where fraudsters often use platforms with poor video quality as part of the deceit. We need something more authoritative, along with additional checks. Enter identity security.
Distinguishing reality from the deepfake
To protect against deepfake threats, businesses need to batten down the hatches. Increased training for staff on how to spot a deepfake is essential. So is managing access for all workers – not just employees but also third parties such as partners and contractors. Organisations must ensure these identities have only as much access as their roles and responsibilities require: no more, no less, so that if a breach does occur, it is contained rather than allowed to spread throughout the organisation. Data minimisation – collecting only what is necessary and sufficient – is also essential.
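As a loose illustration of that least-privilege principle (the roles, entitlements, and helper function below are hypothetical examples, not any vendor’s API), access can be modelled as an explicit allow-list per role, with everything else denied by default:

```python
# A minimal deny-by-default access check: every role maps to an
# explicit allow-list of entitlements, and anything unlisted is denied.
# All role and entitlement names here are hypothetical examples.

ROLE_ENTITLEMENTS = {
    "teller": {"view_account", "process_deposit"},
    "loan_officer": {"view_account", "view_credit_report", "approve_loan"},
    "contractor_dev": {"read_sandbox_data"},  # third parties get narrow access
}

def is_allowed(role: str, entitlement: str) -> bool:
    """Grant access only if it is explicitly assigned to the role."""
    return entitlement in ROLE_ENTITLEMENTS.get(role, set())

print(is_allowed("teller", "process_deposit"))       # True
print(is_allowed("teller", "approve_loan"))          # False: outside the role
print(is_allowed("contractor_dev", "view_account"))  # False: breach stays contained
```

The point of the deny-by-default design is exactly the containment described above: a compromised contractor account cannot reach customer accounts it was never granted.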
Stronger forms of digital identity security can also help stop an attack from succeeding. For instance, verifiable credentials – cryptographically signed proofs that someone is who they say they are – could be used to “prove” a person’s identity rather than relying on sight and sound. Faced with a potential deepfake, the counterparty could demand that proof before acting, confirming the person in question really is who they claim to be.
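To make the idea concrete, here is a minimal sketch of the sign-and-verify pattern at the core of verifiable credentials, using an Ed25519 keypair from the open-source `cryptography` library. The credential fields are illustrative only; a production system would follow a full standard such as the W3C Verifiable Credentials data model.

```python
# Sketch of the cryptographic core of a verifiable credential: an issuer
# signs a claim, and a verifier checks the signature against the issuer's
# public key. Credential fields below are hypothetical examples.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The issuer (e.g. a bank) holds the private key.
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()  # distributed to verifiers

credential = json.dumps({
    "subject": "alice@example-bank.com",
    "claim": "authorised_payments_officer",
    "issuer": "Example Bank plc",
}, sort_keys=True).encode()

signature = issuer_key.sign(credential)

# The verifier checks the proof instead of trusting sight and sound.
try:
    issuer_public_key.verify(signature, credential)
    print("Credential verified: signed by the issuer.")
except InvalidSignature:
    print("Rejected: not signed by the issuer.")

# A deepfake can imitate a face or a voice, but without the issuer's
# private key it cannot produce a valid signature over an altered claim.
tampered = credential.replace(b"alice", b"mallory")
try:
    issuer_public_key.verify(signature, tampered)
except InvalidSignature:
    print("Rejected: tampered credential fails verification.")
```

This is what makes the approach resistant to deepfakes: the “proof” rests on possession of a private key, not on how convincing a face or voice appears.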
Some emerging security tools even leverage AI to defend against deepfakes, learning to spot and proactively flag the signs of fake video and audio to thwart potential breaches. Overall, we’ve seen that businesses using AI and machine learning tools, along with SaaS and automation, scale as much as 30% faster and get more value from their security investment through increased capabilities.
Building a security backbone through AI-enabled identity security
As the battle against AI-enabled threats rages on, the war goes far beyond deepfakes. Bad actors are leveraging AI to create more realistic phishing emails, masquerade as official bank sites to trick consumers, and fuel the rapid dissemination of malware.
Adhering to regulatory standards is of utmost importance in navigating this complex threat landscape, but it should be considered the baseline for enhancing security practices. To ensure businesses are best prepared to combat bad actors, regulation needs to be met with robust technology such as AI-enabled identity security. Through this, organisations can scale their security programmes whilst gaining visibility and insight into their applications and data.
In today’s digital age, organisations cannot compete securely without AI. The reality is that cyber criminals have access to the same tools and technology that businesses use. But it’s not enough for businesses to simply keep pace with criminals; rather, they need to get ahead by working closely with security experts to implement the tools and technology that can help combat the rise in threats.
With over 9 in 10 (93%) financial services firms facing an identity-related breach in the last two years, embedding a unified identity security programme that monitors everyone in the network will allow organisations to see, manage, control, and secure every variation of identity – employee, non-employee, bot, or machine. This will help the financial services industry know who has access to what, and why, across the entire network – visibility that is vital for detecting and remediating risky identity access and responding to potential threats in real time.
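As a hedged sketch of what answering “who has access to what, and why” can look like in practice (the identities, entitlements, and records below are entirely hypothetical), an access review amounts to inverting the access inventory by entitlement and flagging entries with no recorded business justification:

```python
# Hypothetical access inventory covering every identity type (employee,
# partner, machine) with the entitlement held and the recorded business
# reason. All data below is illustrative.
from collections import defaultdict

access_records = [
    {"identity": "jane.doe",   "type": "employee", "entitlement": "approve_payments", "reason": "role: treasury"},
    {"identity": "acme-audit", "type": "partner",  "entitlement": "read_ledger",      "reason": "contract #881"},
    {"identity": "etl-bot-7",  "type": "machine",  "entitlement": "read_ledger",      "reason": "nightly export"},
    {"identity": "etl-bot-7",  "type": "machine",  "entitlement": "approve_payments", "reason": None},
]

# Invert to "entitlement -> holders" and flag access with no justification.
by_entitlement = defaultdict(list)
for rec in access_records:
    by_entitlement[rec["entitlement"]].append(rec)

for entitlement, holders in by_entitlement.items():
    print(f"{entitlement}:")
    for rec in holders:
        note = rec["reason"] or "NO JUSTIFICATION - review and revoke"
        print(f"  {rec['identity']} ({rec['type']}): {note}")
```

Here the machine identity holding a payments entitlement with no recorded reason is exactly the kind of risky access a unified programme is meant to surface and remediate.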
Only through a combination of increased training and stronger forms of digital identity security can banks and other financial institutions begin to navigate the sea of fakes and teach their customers to do the same. As the pool of deception grows, investment in AI and automation to protect against such attacks must be a priority in 2024.