Using Blockchain and AI to Fight “Deepfakes”

This blog post was originally published by Bitfury. It is reprinted here with the permission of Bitfury.

The internet’s democratization of information is one of the most significant societal shifts of the last millennium. It has enabled us to move from a one-way centralized system of information sharing, where we depended on our governments and large media outlets for news, to a decentralized world where anyone can be a journalist and information is more widely available.

This shift, however, has not been without social cost. When anyone can share information, it is often difficult to know if the information is accurate. Boundaries between facts and opinions have continued to fade and have been exploited to influence everything from national elections to individual decisions. Today, “fake news” is as ordinary in our lives as newspapers were to our grandparents — but, unlike newspapers, networks can spread these fake ideas in seconds across the world. This effect is especially pernicious with “deepfakes,” or images and videos that are incredibly realistic but are, in fact, computer-generated.

Deepfake technology was born out of a legitimate effort to train artificial intelligence networks. Researchers were using multiple approaches to create images and content, but the end results were poor: computer-generated faces tended to be blurry or to be missing parts.

Ian Goodfellow, a PhD student in machine learning at the University of Montreal, came up with a brilliant solution. He proposed the creation of a reinforcement loop — a computer running a generative algorithm that would only create “fake” content. That fake content would appear alongside real content and would be used to “test” a second algorithm on whether the content was real or fake. This adversarial approach would allow the second algorithm to be trained much faster, and its results could in turn be used to train the first algorithm to create even more realistic content. This is how “Generative Adversarial Networks,” or GANs, were born.
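To make that loop concrete, here is a minimal sketch of the adversarial training setup in PyTorch. The network sizes, learning rates and the flattened 28×28 image shape are illustrative assumptions for this post, not details taken from Goodfellow's original work.

```python
# Minimal GAN training loop: a generator learns to fool a discriminator,
# and the discriminator learns to separate real images from generated ones.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # illustrative sizes

# Generator: turns random noise into "fake" images.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())

# Discriminator: scores whether an image looks real (1) or fake (0).
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    """real_batch: tensor of shape (batch, img_dim) with real images."""
    batch = real_batch.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real images from generated ones.
    fake_batch = G(torch.randn(batch, latent_dim)).detach()
    d_loss = bce(D(real_batch), real_labels) + bce(D(fake_batch), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator into answering "real".
    g_loss = bce(D(G(torch.randn(batch, latent_dim))), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Each pass through this loop makes the discriminator a slightly better detector and the generator a slightly better forger, which is exactly the feedback dynamic the rest of this post is concerned with.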

The technology took off — and gave a “digital imagination” to machines that could help them generate large amounts of inexpensive but lifelike content. This content is absolutely necessary for training neural networks — after all, most AI programs need a lot of data to achieve the necessary accuracy. For instance, an AI-enabled diagnostic tool can only detect heart disease if it knows what heart disease looks like. GANs are showing promising results in this sector, where privacy considerations limit data accessibility. GANs are also being used to generate realistic videos of driving scenarios for training autonomous cars (meaning Tesla and other companies do not have to actually go out and drive an autonomous car in the real world for its training — a far safer and eco-friendlier alternative).

But as the machine “digital imagination” became more advanced, it became harder to tell if the content it created was computer-generated or real. The term “deepfake” entered the conversation in 2017 on Reddit when a user named “deepfakes” used this technology to create fake videos. One famous example is the “photoshopping” of actor Nicolas Cage’s face into several well-known movies. To the untrained eye, these “swaps” looked authentic.

Nowadays, the technology has advanced to not even require “swapping” — you can generate a hyper-realistic animation of Mark Zuckerberg, for example, or Barack Obama — and have them say whatever you want, without any source footage. The quality of these deepfakes is improving rapidly, and the tools to create them are completely open source. Tech companies are starting to recognize the threat this technology poses to their networks — last September Google released a large dataset of deepfake videos to help develop detection systems, while Facebook launched a “deepfake detection challenge” with a $10 million prize.

Governments are also on alert. I recently attended a workshop, organized by the AI and Robotics division of the United Nations’ criminal investigation arm, focused entirely on the problem of deepfakes. Several attendees expressed concerns that deepfakes pose a serious threat to democracies and legitimate governments across the world. The event began with a sobering discussion about the advancement of the technology. The intrinsic nature of the technology and its feedback loop means that the better an algorithm can detect fake content, the stronger and more realistic the content becomes. There are, however, several approaches being used now to detect these “deepfakes.”

The first is anthropometric consistency. This is a machine learning technique that measures selected points on a person’s face, including the movement of the mouth, lips and nose, to see whether it is consistent with real-life data. Many deepfakes, while advanced, still have distorted facial movements that betray their computer-generated origin — these are the deepfakes that often seem realistic, but “awkward.” UC Berkeley has shared promising results from this method — by learning the subtle characteristics of how famous people speak and move, it becomes easier to detect fake videos.
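As a rough illustration of the idea, the sketch below compares how facial landmarks move in a suspect clip against a reference profile built from authentic footage of the same person. It assumes the landmarks have already been extracted by some face-tracking tool; the feature choice, threshold and test data are all illustrative assumptions, not a published detector.

```python
# Compare landmark-motion "signatures" between a suspect clip and genuine footage.
import numpy as np

def movement_features(landmarks):
    """landmarks: array of shape (frames, points, 2) with (x, y) positions.
    Returns each point's mean frame-to-frame displacement as a simple signature."""
    deltas = np.diff(landmarks, axis=0)          # motion between consecutive frames
    magnitudes = np.linalg.norm(deltas, axis=2)  # (frames - 1, points)
    return magnitudes.mean(axis=0)               # (points,)

def consistency_score(suspect_landmarks, reference_landmarks):
    """Cosine similarity between movement signatures; closer to 1.0 = more consistent."""
    a = movement_features(suspect_landmarks)
    b = movement_features(reference_landmarks)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

if __name__ == "__main__":
    # Stand-in data: 300 frames of 68 tracked facial points per clip.
    rng = np.random.default_rng(0)
    reference = rng.normal(size=(300, 68, 2)).cumsum(axis=0)
    suspect = rng.normal(size=(300, 68, 2)).cumsum(axis=0)
    score = consistency_score(suspect, reference)
    print("consistency:", score, "-> suspicious" if score < 0.9 else "-> plausible")
```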

Another approach is data-quality analysis. It examines the digital signal of an image or video, including its dominant colors, gamma and saturation, and how they evolve from frame to frame, to spot any inconsistencies.
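A minimal sketch of that frame-to-frame idea is shown below: track one color statistic (here, mean saturation) across a video and flag frames where it jumps abnormally. The z-score threshold, the choice of saturation as the signal, and the file name are assumptions for illustration, not a production forensic method.

```python
# Flag frames whose color statistics jump in a statistically unusual way.
import cv2
import numpy as np

def frame_stats(video_path):
    """Yield (mean saturation, mean brightness) for every frame of the video."""
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        yield hsv[..., 1].mean(), hsv[..., 2].mean()
    cap.release()

def suspicious_frames(video_path, z_thresh=4.0):
    """Return frame indices whose saturation jump is a statistical outlier."""
    sats = np.array([s for s, _ in frame_stats(video_path)])
    jumps = np.abs(np.diff(sats))
    z = (jumps - jumps.mean()) / (jumps.std() + 1e-9)
    return list(np.where(z > z_thresh)[0] + 1)

# Usage (path is hypothetical):
# print(suspicious_frames("clip_to_check.mp4"))
```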

Metadata analytics is a similar technique that checks whether the metadata in a picture (including the camera model, capture date, location and other information) has been manipulated.
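For example, the EXIF block of a JPEG can be inspected for missing or suspicious fields. The sketch below uses Pillow’s EXIF reader to pull out the camera model, capture date and editing-software tags; which fields count as “red flags,” and the file name, are illustrative assumptions.

```python
# Inspect a photo's EXIF metadata for signs of manipulation or stripping.
from PIL import Image, ExifTags

def exif_report(path):
    exif = Image.open(path).getexif()
    # Map numeric EXIF tag IDs to readable names (e.g. 271 -> "Make").
    named = {ExifTags.TAGS.get(tag_id, tag_id): value
             for tag_id, value in exif.items()}
    flags = []
    if not named.get("Model"):
        flags.append("no camera model recorded")
    if not named.get("DateTime"):
        flags.append("no capture date recorded")
    if "Software" in named:
        flags.append(f"processed by software: {named['Software']}")
    return named, flags

# Usage (file name is hypothetical):
# metadata, warnings = exif_report("photo_to_check.jpg")
# print(warnings or "metadata looks intact")
```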

These approaches, used either separately or together, can give us a good estimation of the authenticity of an image or video, and will eventually result in a suite of digital forensics tools that can automatically unmask deepfake content. But once content is identified as fake, it needs to be labeled as such, no matter where it appears.

This is where blockchain can play a key role. A digital signature can be added to the deepfake data and its metadata, which can be published to a blockchain (public or private) that anyone can access to check the validity of the content. This would make the information immutable and immediately auditable by anyone (from governments to individuals). The same approach could be implemented to validate original content directly at the source as well — a phone or digital camera can use a simple app to digitally sign its content immediately after creation (verifying its authenticity). Web browsers could include “deepfake detection” plugins that scrape the web and the blockchain for known deepfake content and inform you if deepfake content is being used on your social media feed or ends up in your inbox.
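In outline, the signing step could look like the sketch below: hash the media file together with its metadata, sign the digest with a device key, and hand the record to a blockchain client. The `publish_to_chain` function is a hypothetical placeholder (any public or private chain API could sit behind it), and the metadata values are made up for the example.

```python
# Hash and sign a piece of media so its authenticity record can be anchored on a chain.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def make_record(media_bytes: bytes, metadata: dict, key: Ed25519PrivateKey) -> dict:
    # Bind the content and its metadata together in one digest.
    digest = hashlib.sha256(
        media_bytes + json.dumps(metadata, sort_keys=True).encode()
    ).hexdigest()
    signature = key.sign(bytes.fromhex(digest)).hex()
    public_key = key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw).hex()
    return {"content_hash": digest, "metadata": metadata,
            "signature": signature, "public_key": public_key}

def publish_to_chain(record: dict) -> None:
    # Hypothetical placeholder: in practice this would submit the record as a
    # transaction to a public or private blockchain for anyone to audit.
    print("would publish:", record["content_hash"])

if __name__ == "__main__":
    device_key = Ed25519PrivateKey.generate()  # e.g. held in a phone's secure hardware
    record = make_record(b"...raw image bytes...",
                         {"camera": "example-cam", "captured_at": "2020-01-01T00:00:00Z"},
                         device_key)
    publish_to_chain(record)
```

Anyone who later retrieves the record can recompute the hash of the content they received and verify the signature against the published public key, which is what makes the audit trail usable by governments and individuals alike.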

Unfortunately, despite these possibilities, global efforts to fight deepfake content are far behind the efforts that advance it. Public authorities, governments, social media giants and scientists should work together to fight this threat — making sure that while our access to information remains decentralized, we all have tools at our disposal to ensure the information we see is real.

Fabrizio Del Maffeo
Head of Artificial Intelligence, Bitfury
