Rapidly evolving technology platforms bring growing conveniences and synchronization to many walks of life and business. But as those platforms grow in sophistication, so does the creativity of would-be fraudsters who aim to take advantage of both businesses and consumers. The realm of so-called “deepfakes” – artificially created images, video and audio designed to emulate real human characteristics – has in recent years garnered widespread attention and is an area of growing concern.

Bad actors can leverage the technology behind deepfakes to commit identity theft, blackmail, and fraud. These artificial pieces of content rely on advanced analytics, including neural networks (computer systems that recognize patterns in data) and various machine learning techniques. Developing a deepfake photo or video typically involves feeding hundreds of thousands of images into a neural network, “training” it via artificial intelligence to identify and reconstruct face or voice patterns. As fraudsters adopt better AI, the number of images or videos required to train a model drops substantially, making it easier for them to use these tools at scale.

Bad actors can also use deepfakes to attack more fundamental aspects of our collective culture, undermining society more broadly through the spread of disinformation.

Visual and audio communications can be powerful. When they closely resemble real individuals, or trusted sources of news, the misinformation they spread can undermine public discourse by casting doubt on news outlets, government, or influential voices on social media. Deepfakes can be incredibly effective at spreading inaccurate messages because they can proliferate far and wide at a rapid pace. Bolstered by algorithms that interact with pre-existing cognitive biases, misinformation can have a quick impact.

From doctored imagery and audio of politicians that can sway election outcomes, to misinformation ostensibly from public agencies, deepfakes can stir confusion and fear. Appearing to come from business leaders, they can also disrupt consumer engagement – shaking trust and undercutting the messages or product offerings those businesses want consumers to focus on.

There are, however, tools and best practices that can thwart such efforts. Most important, and most constant, is vigilance: fraudsters are relentless and always at work, looking to take advantage of every loophole or weak spot. The first line of defense is recognizing deepfake videos themselves: at this stage, their quality is still such that they can often be spotted if you know what to look for. A few telltale signs: jerky movement, shifts in lighting from one frame to the next, shifts in skin tone, strange blinking (or no blinking at all), and poor lip sync with the subject’s speech.

Some emerging technologies are also helping video makers authenticate their videos. For example, a cryptographic algorithm can be used to insert hashes at set intervals during the video. If the video in question is altered, the hashes will change.

Good security procedures can also go a long way toward thwarting would-be fraudsters. As an emerging threat, deepfakes thrive on the great deal of technology at fraudsters’ disposal, especially machine learning and advanced analytics. Businesses can therefore fight fire with fire, leveraging those same capabilities to defend against attacks, as Experian already does. A layered strategy of defenses is also key, particularly as it relates to how fraudsters may try to distribute or deploy these deepfakes. The threat landscape is constantly evolving, so there is arguably no more important point of focus than guarding the front door: the point of access to the environments fraudsters hope to leverage to distribute their deepfake content.

As the sophistication and proliferation of deepfakes continue to evolve, so too will our measures to stay vigilant and counter such nefarious uses of technology. While the nuances of this technology will continue to shift, core best practices should not. With awareness and vigilance, consumers and businesses alike can stay one step ahead of the deepfake traps that lie in wait.

About the Author:  David Britton, VP of Strategy, Global ID & Fraud at Experian

Source:  Experian