In the midst of the digital age, concern about fake news is commanding ever more attention from public institutions, the media and public figures of all kinds. Not for nothing: fake news carried real weight in major electoral processes such as the 2016 United States presidential election, the referendum on the United Kingdom's exit from the European Union, and the Brazilian general election that handed Bolsonaro victory.
Without looking that far afield, in our country Facebook recently had to shut down three large far-right networks that, through 30 pages, groups and duplicate accounts, had accumulated more than one and a half million followers and more than 7 million interactions. These groups were dedicated to spreading hoaxes and doctored images.
But now we face another type of digital threat, one that is causing fresh headaches: the so-called deepfake, a term coined by combining "deep learning" and "fake". It is essentially a form of artificial intelligence that allows any user to fabricate videos and audio of real people that look and sound genuine. To do this, generative adversarial networks (GANs) are used: a class of algorithms that can create new data from existing datasets.
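Conceptually, a GAN pits two models against each other: a generator that fabricates samples and a discriminator that tries to tell real from fake; each one's progress forces the other to improve. The sketch below is a deliberately tiny, hypothetical 1-D example in NumPy (nothing like a real deepfake system) showing only that adversarial structure: the "real" data is a simple distribution, the generator and discriminator are one-parameter-pair functions, and gradients are taken numerically for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a target distribution the generator must imitate.
def sample_real(n):
    return rng.normal(loc=4.0, scale=0.5, size=n)

# Generator g(z) = a*z + b maps noise to fake samples.
# Discriminator d(x) = sigmoid(w*x + c) scores samples; ~1 means "looks real".
gen = {"a": 1.0, "b": 0.0}
disc = {"w": 0.1, "c": 0.0}

def generate(z, p):
    return p["a"] * z + p["b"]

def discriminate(x, p):
    return sigmoid(p["w"] * x + p["c"])

def disc_loss(p, real, fake):
    # Discriminator wants d(real) -> 1 and d(fake) -> 0.
    eps = 1e-8
    return -np.mean(np.log(discriminate(real, p) + eps)
                    + np.log(1.0 - discriminate(fake, p) + eps))

def gen_loss(p, z, disc_p):
    # Non-saturating generator loss: fool the discriminator, d(fake) -> 1.
    eps = 1e-8
    return -np.mean(np.log(discriminate(generate(z, p), disc_p) + eps))

def num_grad(loss_fn, params, h=1e-5):
    # Central-difference gradients: enough for a toy sketch.
    g = {}
    for k in params:
        up, dn = dict(params), dict(params)
        up[k] += h
        dn[k] -= h
        g[k] = (loss_fn(up) - loss_fn(dn)) / (2 * h)
    return g

lr = 0.05
for step in range(500):
    real = sample_real(64)
    z = rng.normal(size=64)
    fake = generate(z, gen)
    # Alternate updates: one discriminator step, then one generator step.
    dg = num_grad(lambda p: disc_loss(p, real, fake), disc)
    for k in disc:
        disc[k] -= lr * dg[k]
    gg = num_grad(lambda p: gen_loss(p, z, disc), gen)
    for k in gen:
        gen[k] -= lr * gg[k]

print(round(gen["b"], 2))  # the generator's offset drifts toward the real data
```

Real deepfake GANs follow the same two-player loop, only with deep convolutional networks over images instead of two scalar parameters.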
Ultimately, deepfakes are one more form of digital manipulation, and one of the most likely to be used for trolling in the best of cases. But how can we detect them? And, above all, what are institutions and private companies doing to prevent their disastrous consequences? In this special we review the efforts under way to stop this new digital scourge.
Why are deepfakes so dangerous?
Deepfake technology makes it easy to substitute one person's face for another's, like a kind of digital mask, to make us believe someone said things they never actually said. As you can guess, these techniques have serious implications for establishing the legitimacy of the information circulating on the internet.
Although they are often used to create humorous videos, the truth is that deepfakes have a dark potential to destroy a person's public image or to sway public opinion through disinformation. Unfortunately, this kind of misuse is more widespread than we would like, and remarkably effective, we should add.
A clear example is the DeepNude app, which let users upload a photo of a clothed person and generate a fake nude image of them. Fortunately it has since been shut down, but it is worth stressing how easy these tools are to use: no editing skills are required, since the algorithm does all the work.
In the case of DeepNude, the platform offered incredibly realistic results and was fully accessible through its website for Windows and Linux. As expected, montages featuring celebrities such as Katy Perry or Gal Gadot soon appeared online, to the point that pressure from these actresses' lawyers did not let up until major adult content websites removed the videos.
And this is only the tip of the iceberg of the manipulation these kinds of applications make possible. Now imagine the consequences of a campaign of this type directed against a political figure in order to sway an election in a country or region. Malice knows no limits.
How are deepfakes fought?
One of the first companies to speak out was none other than Google, which announced its firm intention to combat deepfakes and, as the saying goes in these cases, to fight fire with fire. The technology giant confirmed the release of a database of some 3,000 videos manipulated with artificial intelligence (deepfakes), created specifically to help researchers refine their detection tools.
To build it, Google hired real actors to record their faces, which serve as a reference for determining whether a video has been artificially altered. Using publicly available deepfake generation methods, thousands of deepfakes were created from these recordings.
The resulting videos, real and fake alike, have been uploaded to the collaborative development platform GitHub so that researchers can fully understand what these systems produce. The database, as we say, is fully accessible, although you must first be granted permission.
For its part, Facebook also plans to create a similar database by the end of this year. According to its chief executive, Mark Zuckerberg, the main problem is that the industry lacks a standardized system for detecting deepfakes. That is why Facebook has teamed up with the Partnership on AI, Microsoft, and academics from Cornell Tech, MIT, Oxford, UC Berkeley, the University of Maryland, College Park, and the University at Albany-SUNY to build the Deepfake Detection Challenge (DFDC).
This large-scale project will include an extensive, carefully labeled database, along with grants and awards to attract as many collaborators as possible. The idea is to build a community that helps detect and prevent manipulated videos using AI.
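At heart, the detectors these datasets enable are binary classifiers: given features extracted from a video, predict "authentic" or "manipulated". The sketch below illustrates that pipeline with entirely synthetic stand-in features and a from-scratch logistic regression in NumPy (a hypothetical toy, not the DFDC's or Google's actual method; real detectors use deep networks over video frames).

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins for per-video features (e.g. blending artifacts, blink rate).
# Real detectors extract such signals from frames; these are synthetic.
n = 200
real_feats = rng.normal([0.2, 0.8], 0.1, size=(n, 2))   # label 0: authentic
fake_feats = rng.normal([0.7, 0.3], 0.1, size=(n, 2))   # label 1: manipulated
X = np.vstack([real_feats, fake_feats])
y = np.concatenate([np.zeros(n), np.ones(n)])

# Logistic-regression detector trained by batch gradient descent.
w = np.zeros(2)
b = 0.0
lr = 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(manipulated)
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.2f}")
```

The value of large labeled corpora like Google's or the DFDC's is precisely in supplying the (X, y) pairs that such a classifier, at whatever scale, learns from.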
There is no doubt that the proliferation of deepfakes has become a very serious issue, with severe consequences that cannot be ignored. Although the measures proposed by the main players committed to this cause may seem impractical, or even counterproductive, in the long run they may be the only way to stamp out this abuse. Counterintuitive as it sounds, fighting deepfakes with more deepfakes gives detection tools more data to learn from, making it easier for them to spot these kinds of montages.