Artificial Intelligence, Digital Forensics
Deepfakes: Much More Than Meets The Eye (Part 2)
As social media becomes more prevalent in our day-to-day lives, so does the use of deepfakes. Deepfakes are highly realistic look-alike videos and images that their creators use to portray a person in any manner they wish. In our last blog, “Deepfakes: Much More Than Meets The Eye”, we discussed how difficult it is to detect a deepfake, a task that is becoming ever more daunting. Part of being able to identify a deepfake, however, is knowing how the technology that produces them works. That technology is a combination of machine learning and artificial intelligence; within those frameworks sit neural network architectures and generative adversarial networks. In this blog, we will take a closer look at each. Together, these two technologies give deepfake creators several types of networks for producing a fake image or video.
From a biological perspective, neural networks are computational systems loosely inspired by the way the brain processes information. Specialized cells called neurons are connected to each other in a complex network, allowing information to be processed and transmitted. In computer science, artificial neural networks are built from thousands of nodes (the artificial counterpart of neurons) connected in a specific fashion. Nodes are typically arranged in layers; the way in which they are connected determines the type of the network and, ultimately, its suitability for one computational task over another.
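To make the idea of layered nodes concrete, here is a minimal sketch of a forward pass through a tiny network in plain Python. The weights, biases, and layer sizes are arbitrary illustrative values, not taken from any real deepfake model: each node sums its weighted inputs, adds a bias, and squashes the result with an activation function.

```python
import math

def sigmoid(x):
    # Squash a node's weighted input into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    # Each output node sums its weighted inputs, adds its bias,
    # and applies the activation function.
    return [
        sigmoid(sum(w * i for w, i in zip(node_weights, inputs)) + b)
        for node_weights, b in zip(weights, biases)
    ]

# A toy network: 3 input values -> 2 hidden nodes -> 1 output node.
hidden = layer_forward([0.5, 0.1, 0.9],
                       weights=[[0.2, -0.4, 0.6], [-0.3, 0.8, 0.1]],
                       biases=[0.0, 0.1])
output = layer_forward(hidden,
                       weights=[[0.7, -0.5]],
                       biases=[0.2])
print(output)  # a single activation between 0 and 1
```

Changing how the layers are wired together (fully connected here, convolutional in image networks) is exactly what determines which tasks the network is suited for.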
When images or videos are run through a neural network, there are typically three input nodes for each pixel, the smallest unit of illumination from which an image is composed. These three nodes carry the amounts of red, green and blue that make up that pixel. For image-based applications, the most effective type of network is the convolutional neural network, which is the primary architecture deepfake applications use. As the technology advances, so does the quality of the images neural networks produce, because the network learns to generate results that are closer to what is desired. It learns by finding the amounts of red, green and blue required to create ever more convincing images. One way the network tests and corrects itself is a process called backpropagation.
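The two ideas above — one input node per color channel, and learning by correcting errors — can be sketched in a few lines. This is a deliberately simplified illustration, not real backpropagation through a deep network: a pixel's channels become three normalized inputs, and a single one-weight node is nudged repeatedly in the direction that reduces its squared error.

```python
def pixel_to_inputs(r, g, b):
    # Each channel value (0-255) becomes one input node value in [0, 1].
    return [r / 255.0, g / 255.0, b / 255.0]

def backprop_step(weight, x, target, lr=0.5):
    # One gradient-descent update for a single linear node:
    # compute the prediction, the error, the gradient of the
    # squared error with respect to the weight, then adjust.
    pred = weight * x
    error = pred - target
    grad = 2 * error * x          # d(error^2)/d(weight)
    return weight - lr * grad

inputs = pixel_to_inputs(200, 120, 40)   # three input-node values
w = 0.1
for _ in range(20):
    w = backprop_step(w, x=inputs[0], target=0.5)
print(w * inputs[0])  # the node's output converges toward the target 0.5
```

A real network repeats this correction across millions of weights at once, propagating the error backward layer by layer — hence the name backpropagation.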
The objective of a generative adversarial network (GAN) is to create something new based on previously analyzed data. In contrast to a single neural network, a GAN comprises two neural networks that compete against each other, each learning from the other, which generates better results. The two parts of a GAN are called the generator and the discriminator. The job of the generator is to create a new image based on the knowledge the network has been taught from a dataset. Once this data is ingested, the discriminator determines whether the image is real or fake. The two components stay in constant contact throughout the deepfake creation process: the generator learns to make images that fool the discriminator into judging them genuine, while the discriminator learns not to be tricked by the generator and the images it produces. The better the discriminator is, the harder the generator must work to produce convincing images and, in the end, the better its output becomes.
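The back-and-forth between generator and discriminator can be sketched with a toy, one-dimensional example. Here the "real images" are just numbers clustered near 1.0, the generator is a single adjustable value, and the discriminator is a logistic unit — a drastic simplification of the deep convolutional networks real deepfake GANs use, but the alternating training loop has the same shape.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# "Real" data: scalar samples clustered around 1.0 (stand-ins for images).
def real_sample():
    return 1.0 + random.gauss(0, 0.1)

random.seed(0)
g = -1.0            # generator parameter: the value it currently emits
w, b = 0.0, 0.0     # discriminator parameters: D(x) = sigmoid(w*x + b)
lr = 0.05

for _ in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    x_real, x_fake = real_sample(), g
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: adjust g so the discriminator scores it as real.
    d_fake = sigmoid(w * g + b)
    g += lr * (1 - d_fake) * w

print(g)  # the generator's output has drifted toward the real data
```

The generator starts far from the real distribution, and each round the discriminator's feedback pulls it closer — the same adversarial pressure that drives deepfake images to become more convincing.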
Deepfakes and other technology built on artificial intelligence and machine learning will only continue to evolve and produce ever more convincing fake videos and images. The ability to positively detect deepfakes will lie in the parallel advancement of the same technology, with an emphasis on detection and prevention. Permanently eradicating deepfakes won't be possible, but slowing their spread as the technology advances certainly is. In our next blog, we will take a closer look at the technology (algorithms) being developed to positively identify deepfakes. This technology was developed through a partnership between two of the largest tech companies in the world, with the goal of making it easier for the average internet user to identify and discredit deepfakes being spread on the internet.