What are the deepfakes that have come to make fake news look like a… toy

From Scarlett Johansson's fake pornographic video to the Gabon coup attempt, truth and falsehood are becoming dangerously blurred


The internet, as we know, has its good side: immediate information, for example, or the ability to find almost anything you want in a matter of seconds. But it also has its bad, dark side. It can become a tool in the hands of any scammer and any criminal.

So far, one of the internet's biggest "thorns" has been fake news. Some of it is crude and easy to spot, but some is put together so masterfully that you can hardly tell where the lie begins and ends.

Because the internet evolves as technology advances, however, it seems that where information is concerned the real difficulties are still ahead of us. Since the beginning of last year, deepfakes, which have come to stay and make fake news look like a… toy, have been appearing more and more often.

Deepfakes did exist before 2019, but a trained eye could easily spot them; the term itself first appeared in 2017.

In recent months, however, this has changed dramatically, with experts warning that if the phenomenon is not brought under control, and quickly, it may soon develop into a scourge capable of toppling and installing governments in any corner of the globe.


This is essentially a new technological revolution, far removed from the simple ways fake news can be fabricated. Deepfakes draw on the field of artificial intelligence: we are talking about audio or video in which someone appears to say or do things they never said or did.

Deepfakes are produced, as already mentioned, through artificial intelligence and its ability to decode a person's movements and speech: to "read" how someone moves when talking, how their voice changes when they are angry or happy, and then to produce a mimicry so convincing that telling real from fake becomes genuinely difficult.
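To make the mechanism a little more concrete, the sketch below (in Python, using the PyTorch library; all sizes and names are purely illustrative) shows the classic face-swap idea behind many deepfake videos: a shared encoder learns a common facial representation, one decoder is trained per person, and swapping decoders renders one person's expressions onto another person's face.

```python
# Minimal sketch of the shared-encoder / per-person-decoder idea behind many
# face-swap deepfakes. Architecture sizes and variable names are illustrative.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 512),                         # shared latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 128, 16, 16)
        return self.net(x)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per person

# Training (sketch): each decoder learns to reconstruct its own person's faces.
faces_a = torch.rand(8, 3, 64, 64)            # placeholder batch of face crops of person A
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# The "swap": encode person A's expression, decode it with person B's decoder.
fake_b = decoder_b(encoder(faces_a))
```

In practice, such systems need thousands of face images per person and far larger networks, which is precisely why public figures, of whom abundant footage exists, are the easiest targets.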

About a year ago, Deeptrace, an Amsterdam-based cybersecurity company that builds tools for detecting fake videos, published research aimed at quantifying the growth of the deepfake phenomenon. According to its findings, the deepfakes identified in the first nine months of 2019 roughly doubled compared to 2018, reaching 14,678.


The vast majority of these videos (96%) are pornographic. A much smaller number target politicians or well-known businessmen; indicatively, 2% of the deepfake videos the research identified on YouTube depict figures from the business world.

One of the first to feel first-hand the harm deepfakes can cause was actress Scarlett Johansson, whose face was placed in pornographic videos she had never shot. Dutch TV presenter Dionne Stax and Indian journalist Rana Ayyub suffered the same fate. In all three cases the result was identical: the videos were watched by millions of people and, although they were eventually taken down from the platforms where they had been posted, many still believe these women actually appeared in pornographic films.

In the case of the Indian journalist, in fact, the deepfake video was created as a means of blackmail, to pressure her into dropping her investigation of the rape and murder of an eight-year-old girl from Kashmir.

As you can easily see, however, not all deepfakes… are pornographic. According to the Wall Street Journal, a British company executive fell victim to an audio deepfake: believing he had spoken with the director of a Hungarian company, he approved a transfer of 250,000 to an account he was given.

The Speaker of the US House of Representatives, Nancy Pelosi, has also fallen victim (her voice was altered and her gestures slowed down in footage of real speeches, in a way that made her appear drunk), as have the former and the current president of the United States: fake videos of both Obama and Trump can easily be found on the internet.

The most outrageous and dangerous case, however, comes from faraway Gabon, where the president's long absence, combined with a video suspected of being a deepfake, convinced parts of the armed forces that the president was dead or incapable of carrying out his duties. As a result, in early January 2019 army units occupied the public radio and television stations.

Their officers spoke at length about the situation created by President Ali Bongo's health problems and declared that they would take the country's fate into their own hands. A year and a half later, no expert has determined with certainty whether the video was authentic or not. The coup failed, however; Bongo gradually began making public appearances again and remains president of the country.


The potential effects of deepfakes on the economy, politics, crime and communication are enormous.

In reality, although those in the crosshairs so far appear to be the famous (from journalists and actors to political and business figures), ordinary people are the ones at much greater risk, simply because they do not have the same means of defending themselves against such attacks.

Some researchers have already developed systems for detecting deepfake material, which examine elements such as lighting, shadows and facial movements to determine the authenticity of the content.
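Very roughly, and purely as an illustration, such a frame-by-frame check might look something like the Python sketch below: detect the face in each frame, crop it, and let a classifier score it as real or manipulated. The classifier here is an untrained placeholder; real systems are trained on large datasets and also examine temporal consistency, lighting and shadows across frames.

```python
# Illustrative frame-level screening: detect faces, crop them, and average a
# (placeholder) classifier's "fake" score over the whole video.
import cv2
import torch
import torch.nn as nn

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

classifier = nn.Sequential(          # placeholder: a real detector would be trained
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(16, 1), nn.Sigmoid(),
)

def score_video(path: str) -> float:
    """Return the average 'fake' probability over all detected face crops."""
    capture = cv2.VideoCapture(path)
    scores = []
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
            crop = cv2.resize(frame[y:y + h, x:x + w], (64, 64))
            tensor = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255
            scores.append(classifier(tensor).item())
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0
```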

Because deepfakes are themselves built on artificial intelligence, some of these researchers believe that artificial intelligence may also be the answer to the problem. Another approach being explored at this stage is to add a kind of filter to an image that "locks" it, so that it cannot be used to create deepfake content. Undoubtedly, however, this is another great challenge.
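One way such a "locking" filter has been prototyped is with adversarial perturbations: a barely visible layer of noise that confuses the face encoder a deepfake pipeline depends on. The Python sketch below illustrates the idea only; the encoder and the perturbation step are stand-ins, not any specific tool's method.

```python
# Minimal sketch of "locking" a photo: nudge its pixels so a face encoder's
# embedding drifts toward a random decoy, while the image looks unchanged.
# The encoder is a stand-in for whatever model a deepfake pipeline would use.
import torch
import torch.nn as nn

encoder = nn.Sequential(             # stand-in face encoder
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 128),
)

def lock_image(photo: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of `photo` perturbed within an invisible budget so that
    its embedding moves toward a random decoy instead of the true face."""
    photo = photo.clone().requires_grad_(True)
    decoy = torch.randn(photo.shape[0], 128)   # 128 = stand-in embedding size
    loss = nn.functional.mse_loss(encoder(photo), decoy)
    loss.backward()
    # One signed-gradient step toward the decoy embedding (FGSM-style).
    locked = photo - epsilon * photo.grad.sign()
    return locked.clamp(0, 1).detach()

selfie = torch.rand(1, 3, 64, 64)    # placeholder image with values in [0, 1]
protected = lock_image(selfie)
```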

In a recent report, the Brookings Institution summarized the range of political and social risks posed by deepfakes: "distorting democratic discourse, manipulating elections, eroding trust in institutions, weakening journalism, undermining public safety".
