It’s a cat-and-mouse game, though, and it seems that as soon as we learn one method for detecting deepfakes, the next generation fixes the flaw. Given that, are there any reliable solutions for figuring out which videos are trying to trick us?

Visual clues

Artifacts aren’t just things Indiana Jones puts in museums: they’re also little aberrations left behind after an image or video has been manipulated. In early deepfakes, these could often be caught with the human eye, and bad deepfakes may still have a few warning signs, like blurring around edges, an oversmoothed face, double eyebrows, glitches, or a generally “unnatural” feel to how the face fits.

For the most part, though, the techniques have now improved to the point where these artifacts are only visible to other algorithms combing through the video data and examining things at the pixel level. Some of them can get pretty creative, like one technique that checks whether the direction of the nose matches the direction of the face. The difference is too subtle for humans to pick up on, but machines turn out to be pretty great at it.
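To give a rough idea of how a check like that might work, here’s a hypothetical, heavily simplified version of the nose-direction idea. The actual research estimates full 3D head poses; this sketch just compares two 2D directions (the nose bridge versus the face’s symmetry axis) from landmark points you’d get out of any off-the-shelf facial landmark detector. The landmark names and coordinates below are made up for illustration.

```python
# Simplified illustration of the "nose direction vs. face direction" idea:
# on an untampered face, the nose bridge and the eyes-to-mouth axis should
# point roughly the same way. Real detectors work in 3D with proper head-pose
# estimation; this is only the gist.

import numpy as np

def angle_of(vec):
    """Angle of a 2D vector in degrees."""
    return np.degrees(np.arctan2(vec[1], vec[0]))

def pose_mismatch(landmarks):
    """
    landmarks: dict of (x, y) pixel coordinates from a landmark detector.
    Returns the angular disagreement (degrees) between the nose bridge
    and the face's symmetry axis.
    """
    nose_dir = np.array(landmarks["nose_tip"]) - np.array(landmarks["nose_bridge"])
    eye_mid = (np.array(landmarks["left_eye"]) + np.array(landmarks["right_eye"])) / 2
    face_dir = np.array(landmarks["mouth_center"]) - eye_mid
    return abs(angle_of(nose_dir) - angle_of(face_dir))

# Made-up coordinates: a large mismatch would be a red flag for manipulation.
points = {
    "nose_bridge": (250, 200), "nose_tip": (252, 260),
    "left_eye": (210, 190), "right_eye": (290, 192),
    "mouth_center": (251, 320),
}
print(f"nose/face disagreement: {pose_mismatch(points):.1f} degrees")
```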

Biometric clues

For a little while, it seemed like the key to unmasking deepfakes was their lack of natural blinking patterns, thanks to the relative scarcity of “eyes-closed” source images. It didn’t take long for the next generation of deepfake technology to incorporate better blinking, though, quickly reducing that technique’s effectiveness.
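For the curious, here’s roughly what a lightweight blink counter could look like. It isn’t the method the researchers actually used (that involved neural networks), just a sketch built on the common “eye aspect ratio” trick, assuming you already have eye landmarks from a detector.

```python
# Sketch of blink counting from a video: the eye aspect ratio (roughly the
# eye's height divided by its width, computed from six landmark points)
# drops sharply whenever the eye closes.

import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks around one eye, in the usual dlib ordering."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distance 1
    v2 = np.linalg.norm(eye[2] - eye[4])   # vertical distance 2
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_per_frame, threshold=0.2, min_frames=2):
    """Count dips below the threshold lasting at least min_frames frames."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks + (1 if run >= min_frames else 0)

# A real person blinks several times a minute; early deepfakes showed far fewer dips.
ear_trace = [0.31, 0.30, 0.29, 0.12, 0.10, 0.28, 0.30] * 10
print("blinks detected:", count_blinks(ear_trace))
```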

Other biometric indicators haven’t been completely cracked yet, though, like individual quirks that algorithms can’t easily automate into a deepfake because they require some contextual understanding of the language being used. Little habits like blinking rapidly when you’re surprised or raising your eyebrows when you ask a question can be picked up and reproduced by a deepfake, but not necessarily at the right times, since the algorithms can’t (yet) automatically figure out when to deploy those movements.

AI that can read heartbeats from video images has plenty of applications beyond deepfake detection, but looking for the periodic movements and color changes that signal heart rate can also help identify AI-generated imposters. The most obvious giveaway is a deepfake with no heartbeat at all, though deepfakes often do have pulses. Even so, irregularities (like different parts of the face displaying different heart rates) can still give a fake away.
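As a rough illustration of the heartbeat idea (known as remote photoplethysmography), here’s a minimal sketch with a synthetic signal: blood flow causes a tiny periodic change in skin color, so the average green-channel value over a face region, tracked across frames, should show a peak at a plausible heart-rate frequency. The numbers and function names are illustrative; real systems do far more filtering and face tracking.

```python
# Minimal sketch of pulling a pulse out of skin color over time.

import numpy as np

def estimate_bpm(green_means, fps):
    """
    green_means: average green value of the face region in each frame.
    fps: video frame rate.
    Returns the dominant frequency in the 0.7-4 Hz band (42-240 bpm), or None.
    """
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()                  # remove the DC component
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)           # plausible heart rates
    if not band.any() or power[band].max() < 1e-6:
        return None                                  # no pulse-like signal at all
    return freqs[band][power[band].argmax()] * 60.0

# Synthetic example: a 72 bpm (1.2 Hz) color oscillation sampled at 30 fps.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
trace = 120 + 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.1 * np.random.randn(len(t))
print("estimated bpm:", estimate_bpm(trace, fps))
```

Running the same estimate on different patches of the same face is exactly the kind of consistency check mentioned above: on real footage, they should all agree.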

AI projects

A lot of big names are very interested in solving the deepfake problem. Facebook, Google, MIT, Oxford, Berkeley, and plenty of startups and researchers are tackling it by training artificial intelligence to spot faked videos using the methods listed above, among others.

One thing both Facebook and Google are working on is creating a dataset of high-quality videos of actors doing things, which they then use to create deepfakes. AI trained on these can then figure out what the telltale signs of deepfakes are and be tasked with detecting them. Of course, this only works as long as the researchers continue to generate deepfakes using the most up-to-date technology, meaning there will always be a bit of lag between new deepfake techniques appearing and these algorithms learning to catch them. With any luck, though, the experiments using actual mice to identify deepfakes will pan out and give us an edge.
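To make that training recipe concrete, here’s a deliberately tiny sketch of the supervised step. It isn’t Facebook’s or Google’s actual system, just the general shape of the approach: take frames labeled “real” or “fake” from such a dataset and train an ordinary image classifier on them. Random tensors stand in for the labeled face crops here.

```python
# Bare-bones sketch of training a frame-level real/fake classifier.

import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18

# Placeholder data: 64 face crops (224x224 RGB) with labels 0 = real, 1 = fake.
frames = torch.randn(64, 3, 224, 224)
labels = torch.randint(0, 2, (64,))
loader = DataLoader(TensorDataset(frames, labels), batch_size=16, shuffle=True)

model = resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: real vs. fake
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(2):                          # a real run would train far longer
    for batch, target in loader:
        optimizer.zero_grad()
        loss = criterion(model(batch), target)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```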

Authentication

Detection technologies aren’t the complete answer to deepfakes, though, as they’ll probably never have a 100% success rate. Deepfakes that have had some time and money put into them could probably pass a lot of sniff tests and fool current AI methods. And let’s remember how the Internet works: even if these fakes are caught, they’ll likely be recirculated and believed by some subset of people anyway.

That’s why it’s also important to have some form of verification mechanism: some proof of which video is the original, or something that can indicate if a video has been modified. That’s what companies like Factom, Ambervideo, and Axiom are doing by encoding data about videos onto immutable blockchains. The basic idea behind a lot of these projects is that the data contained in a video file or generated by a certain camera can be used to generate a unique signature that will change if the video is tampered with. Eventually, videos uploaded to social media might come with an option to generate an authentication code that the original uploader could register on a blockchain to prove they were the original owner of the video.

These solutions have their own set of problems, of course, like video encodings changing the data in the file and altering the signature without the video content actually changing, or legitimate video editing messing up the signature. In high-stakes situations, though, such as commercial transactions where images are used to verify delivery or secure investor support, having an authentication layer like this could help prevent deepfake-related fraud.
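As a toy version of the signature idea, here’s the simplest possible fingerprint of a video file: hash its raw bytes and record the result somewhere immutable (the blockchain part, in the companies above). Any change to the file, a swapped face included, changes the hash. It also illustrates the weakness mentioned above: re-encoding the video changes the bytes too, even when the content looks the same.

```python
# Toy content signature: SHA-256 over the raw bytes of a video file.

import hashlib

def video_signature(path, chunk_size=1 << 20):
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

# At upload time, the original owner registers this value; later, anyone can
# recompute it on a copy of the video and compare.
# print(video_signature("original_clip.mp4"))
```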

Are deepfakes more dangerous than Photoshop?

At this point, we all just assume that images might not be real because we’re fully aware that the technology exists to make almost anything look real in a still image. Eventually, we may start to approach videos with the same sort of skepticism as faking them becomes as easy and convincing as Photoshop currently makes image editing. Even with general awareness, though, it’s easy to imagine plenty of real-life incidents getting their start with a well-timed, high-quality deepfake in the not-too-distant future.

Image credits: Google/FaceForensics, Facebook AI, Ambervideo