The Rise of the Deepfake: Is Synthetic ID Fraud Unstoppable?

By Tim McCarthy, Imperium General Manager

While exponential developments in Artificial Intelligence (AI) and Machine Learning (ML) technology are creating fresh opportunities for advancement in almost every field of human endeavor, there are also dark forces at play.

A recent study by a prestigious British university rated so-called deepfakes (hyper-realistic audio and video trickery) as the ‘most worrying application of artificial intelligence’ because of their potential to further the aims of those involved in crime and terrorism. The report’s authors were concerned not only by the difficulties in detecting fake content but also by the societal harm that could result from a universal distrust of media. Simply put, if everything can be faked, how do we know what’s real?

The hand-wringing over deepfake technology may feel like a million miles away from everyday life. Many of us will only ever encounter it as a kind of cinematic smoke and mirrors used to reverse the aging process on movie stars. You might even dabble with it yourself via one of the deepfake apps that allow people to insert themselves into pop videos or movie scenes.

But the underlying tech has serious implications for us all. At Imperium, we spend a lot of time anticipating the impact of increasingly sophisticated technology – like deepfakes – on ID fraud. We know that, globally, synthetic identity fraud is on the rise. According to McKinsey, it’s the fastest-growing type of financial crime in the U.S.

A synthetic ID can be created using a combination of real and fake information, some of which may be stolen or hacked from a database or even acquired on the dark web. Because the fraud makes only limited use of the victim’s PII (personally identifiable information), it can go unnoticed for months. Adding deepfake and synthetic AI tech into the mix just makes it easier for perpetrators to build more detailed synthetic IDs that lead to more serious levels of fraud.

It’s a long game. Small credit applications at the beginning attract little attention and allow fraudsters to establish what’s known as a ‘thin file’ that can be used to move up to more substantial targets. It can be an expensive business. For our panel and survey clients, fraudsters not only impose a significant financial cost on their organizations but also have the potential to chip away at the trusted brand they’ve worked so hard to build.

It’s not all doom and gloom. Accurately verifying applicants’ IDs during onboarding is critical – whether you’re vetting survey respondents, checking a loan application or verifying ballot requests. The good news is that the same AI that’s being used to create deepfakes can be employed to spot synthetic IDs.

It’s a process that requires continuous monitoring and development, however. To create a robust system, multiple layers of security – going over and above standard two-factor and knowledge-based authentication – are required. These systems should combine manual and automated data analysis, and deploy tactics including screening for multiple users with the same SSN or accounts originating from the same IP address, performing link analysis to check for PII overlaps, and comparing applicants’ behavior against ML-enabled algorithms.
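To make that screening step concrete, here is a minimal sketch of what duplicate-attribute detection and simple link analysis might look like in practice. It assumes applicant records with `ssn`, `ip` and `email` fields; the data, field names and function are illustrative assumptions, not Imperium’s actual implementation.

```python
from collections import defaultdict

# Hypothetical applicant records: in a real system these would come from
# an onboarding database. Field names and values here are illustrative.
applicants = [
    {"id": "A1", "ssn": "123-45-6789", "ip": "203.0.113.7", "email": "jo@example.com"},
    {"id": "A2", "ssn": "123-45-6789", "ip": "198.51.100.4", "email": "al@example.com"},
    {"id": "A3", "ssn": "987-65-4321", "ip": "203.0.113.7", "email": "jo@example.com"},
]

def flag_shared_attributes(records, keys=("ssn", "ip", "email")):
    """Group applicants by each PII attribute and flag any value shared by
    more than one applicant -- a crude form of link analysis."""
    flags = []
    for key in keys:
        groups = defaultdict(list)
        for rec in records:
            groups[rec[key]].append(rec["id"])
        for value, ids in groups.items():
            if len(ids) > 1:
                flags.append((key, value, ids))
    return flags

for key, value, ids in flag_shared_attributes(applicants):
    print(f"Shared {key} {value!r} across applicants {ids}")
```

A production system would, of course, layer this kind of overlap check with behavioral signals and manual review rather than relying on any single rule.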

There is, potentially, another weak point to consider: synthetic IDs are sometimes too perfect. Real people have real histories that form a patchwork of physical and digital evidence in dozens of different data systems. Natural data trails are hard to fake because they stretch back years and include student loan details, old addresses and property records. Synthetic IDs either include some real details that don’t line up with other records or are completely fake, with data that’s a little too neat to be true.
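A detection system can exploit that ‘too perfect’ weakness by scoring how deep and how old an identity’s data trail is. The sketch below is a hypothetical heuristic: the record types, thresholds and field names are assumptions for illustration, not a production rule.

```python
from datetime import date

def history_depth_score(profile):
    """Return a rough 'depth' score for an identity: how many independent
    record types it appears in, and how many years the trail reaches back.
    A real system would query credit bureaus and public-record databases."""
    record_types = ("addresses", "student_loans", "property_records", "utilities")
    present = [t for t in record_types if profile.get(t)]
    oldest = min(
        (rec["year"] for t in present for rec in profile[t]),
        default=date.today().year,
    )
    return len(present), date.today().year - oldest

def looks_synthetic(profile, min_types=2, min_years=3):
    """Flag profiles whose data trail is too shallow or too recent to be
    organic. Thresholds are illustrative, not calibrated values."""
    types, years = history_depth_score(profile)
    return types < min_types or years < min_years

suspect = {"addresses": [{"year": 2024}]}  # a single, very recent trace
print(looks_synthetic(suspect))  # True: the trail is too thin to be organic
```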

While tracking and trapping fraudsters will increasingly require more sophisticated anti-fraud systems, it’s important to balance the requirement for greater transparency with the individual’s right to privacy. That’s why it’s crucial to partner with high-caliber ID verification providers who can implement world-class solutions while complying with the most stringent GDPR and CCPA requirements on information processing, storage and protection.

The importance of using AI ethically – particularly with regard to fairness, bias and non-discrimination – is a discussion-worthy topic in its own right. But it’s my belief that we can – and should – push back against AI-based fraud with an equally powerful AI-enabled fraud-detection response.

With more and more companies and consumers interacting online, and where criminal transgressions can damage lives and destroy reputations, it’s our duty to mount a determined and effective defense of our data.


Tim McCarthy is General Manager at Imperium. He has over 15 years of experience managing market research and data collection services and is an expert in survey programming software, data analysis and data quality. Imperium is the foremost provider of technology services and customized solutions to panel and survey organizations, verifying personal information and restricting fraudulent online activities.
