Why You Can’t Believe Your Eyes: Deepfakes – the Newest Threat to Identity

The rapid rise of deepfake technology has become a hot topic. Celebrities and public figures have been prominently “featured” in viral deepfake videos – from the now-famous lip-syncing video of former President Obama to the recent video seamlessly morphing comedian Bill Hader into Tom Cruise, then Seth Rogen. But the entertainment factor masks a very real threat.

Deepfake activity was mostly limited to the artificial intelligence (AI) research community until late 2017, when a Reddit user who went by “Deepfakes” — a portmanteau of “deep learning” and “fake” — started posting digitally altered pornographic videos. The underlying machine learning technique makes it possible to create audio and video of real people saying and doing things they never said or did. Producing convincing fakes was once costly and limited to a skilled few; today it is relatively easy and inexpensive, especially for sophisticated fraudsters. That is bad news for businesses and consumers alike.

Whereas video was once considered a valid and authentic method of identity proofing, the risks introduced by deepfake technology cannot be overstated. If images and data fall into the wrong hands, bad actors gain the ability to impersonate just about anyone. This poses an obvious threat not only to the political arena and confidential government entities, but also to any organization that onboards new customers and needs to validate a user’s identity, as well as to the customers themselves. AI-based identity fraud is growing, and businesses are beginning to realize that elementary identity proofing is not viable when they truly need to know who they are doing business with and must meet regulations such as Know Your Customer (KYC).

The growing risks surrounding deepfakes – loss of customer trust, the potential to spread misinformation, and severe security exposure – underscore the role that biometrics must play in the identity verification process moving forward. It is increasingly difficult for companies to guarantee that a user is genuinely present at the moment they attempt to validate their identity.

Fortunately, the very technology behind deepfakes can also be used to spot spoofs and fraud. Using a combination of AI and biometric facial recognition, companies can securely and simply authenticate users and verify that they are physically present. Liveness detection guards against fraudulent attempts to gain access to personal data and protects against attempts to bypass biometric identity verification. Fraudsters often try to bypass the system using a doctored photo, a screen image, a recording, or a doctored video — all of which can now be made even more lifelike with deepfake technology.
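To make “genuinely present” concrete, the sketch below shows one common liveness heuristic: prompting the user to blink and tracking the eye aspect ratio (EAR) across webcam frames, since a printed photo or a replayed screen image will not blink on demand. This is only an illustration built on the open-source dlib and OpenCV libraries; the landmark model file, camera index, thresholds, and frame budget are assumptions, and it is not Acuant’s implementation.

```python
# Illustrative liveness sketch only (not Acuant's implementation):
# count blinks via the eye aspect ratio (EAR) over a short webcam capture.
import cv2
import dlib
from scipy.spatial import distance as dist

detector = dlib.get_frontal_face_detector()
# Assumed local copy of dlib's 68-point facial landmark model
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(eye_points):
    # EAR drops sharply when the eye closes (Soukupova & Cech, 2016)
    a = dist.euclidean(eye_points[1], eye_points[5])
    b = dist.euclidean(eye_points[2], eye_points[4])
    c = dist.euclidean(eye_points[0], eye_points[3])
    return (a + b) / (2.0 * c)

EAR_CLOSED = 0.2      # assumed cutoff below which the eye is treated as closed
REQUIRED_BLINKS = 1   # a static photo or screen replay never blinks on demand

def detect_blink(camera_index: int = 0, max_frames: int = 150) -> bool:
    """Return True if at least one blink is observed within max_frames."""
    cap = cv2.VideoCapture(camera_index)
    blinks, eye_was_closed = 0, False
    for _ in range(max_frames):
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
            # Landmarks 36-41 and 42-47 are the left and right eyes
            ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
            if ear < EAR_CLOSED:
                eye_was_closed = True
            elif eye_was_closed:  # eye reopened, so count a blink
                blinks += 1
                eye_was_closed = False
    cap.release()
    return blinks >= REQUIRED_BLINKS
```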

Acuant Face™ provides powerful biometric facial recognition matching and liveness detection technology to prevent identity theft and fraud. Understanding that businesses need to address varying levels of risk, Acuant Face offers three classes of technology spanning low- to high-risk transactions. For example, logging into a ride-share app carries lower risk than conducting a high-value financial transaction, opening an account, or crossing a border.
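As a rough sketch of how verification strength can be tiered to transaction risk, the snippet below maps hypothetical transaction types to verification tiers. The tier names, their contents, and the transaction types are assumptions for illustration only, not Acuant Face’s actual product classes.

```python
# Hypothetical risk-tier mapping, for illustration only; tier names and
# transaction types are assumptions, not Acuant Face's product classes.
from enum import Enum

class VerificationTier(Enum):
    BASIC = "face match only"
    STANDARD = "face match + passive liveness"
    ENHANCED = "face match + liveness with controlled illumination + velocity checks"

RISK_TIER = {
    "ride_share_login": VerificationTier.BASIC,
    "new_account_opening": VerificationTier.ENHANCED,
    "high_value_transfer": VerificationTier.ENHANCED,
    "border_crossing": VerificationTier.ENHANCED,
}

def required_tier(transaction_type: str) -> VerificationTier:
    # Default to the strongest tier when a transaction type is unknown
    return RISK_TIER.get(transaction_type, VerificationTier.ENHANCED)
```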

In high-risk environments, our Enhanced level of facial recognition uses controlled illumination to detect and thwart deepfakes, video replay, and presentation attacks, and even monitors velocity to alert when a person makes suspicious transactions in a short period of time (perhaps in different countries). A user simply captures an image of their government-issued ID and then takes a selfie. The selfie is compared to the photo extracted from the identity document, such as a driver’s license or passport, to verify a match. Contact us to learn more.
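For a sense of what the selfie-to-document comparison step involves, here is a minimal sketch using the open-source face_recognition library to compare a face embedding extracted from an ID photo against one extracted from a selfie. The file names and distance threshold are assumptions; Acuant Face’s matching models, thresholds, and liveness checks are proprietary and not shown here.

```python
# Illustrative selfie-to-ID face match using the open-source face_recognition
# library. File names and threshold are assumptions for demonstration only.
import face_recognition

MATCH_THRESHOLD = 0.6  # the library's common default distance cutoff; tune per risk level

def selfie_matches_id(id_photo_path: str, selfie_path: str) -> bool:
    """Return True if the selfie appears to show the same face as the ID photo."""
    id_image = face_recognition.load_image_file(id_photo_path)
    selfie_image = face_recognition.load_image_file(selfie_path)

    id_encodings = face_recognition.face_encodings(id_image)
    selfie_encodings = face_recognition.face_encodings(selfie_image)
    if not id_encodings or not selfie_encodings:
        return False  # no detectable face in one of the images

    distance = face_recognition.face_distance([id_encodings[0]], selfie_encodings[0])[0]
    return distance <= MATCH_THRESHOLD

if __name__ == "__main__":
    # Hypothetical inputs: the photo cropped from the document and the live selfie
    print(selfie_matches_id("drivers_license_photo.jpg", "selfie.jpg"))
```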

To learn more about Acuant’s identity solutions, schedule a demo now: