
Fake News or False News: DeepFake technology at the service of disinformation… How to protect ourselves?


Imagine this: you click on a news video and see the “President of the United States” in a press conference with a leader from another country. The dialogue looks real… the press conference looks real… and you share it with a friend. They share it with other friends and soon the whole world has seen it. Only later do you find out that the president’s head was superimposed on someone else’s body, and none of it really happened.

Sound unlikely? Not if you’ve seen a certain viral video from the YouTube channel “Ctrl Shift Face” (you can watch it at the end of this article). Since August 2019, it has racked up more than 12.7 million views and more than 16,000 comments at the time of writing.

In that video, the American comedian Bill Hader shares a story about his encounters with Tom Cruise and Seth Rogen. As Hader, a skilled impersonator, does his best impressions of Cruise and Rogen, the faces of those actors merge seamlessly, and unsettlingly, with his own. Deepfake technology makes Hader’s impressions that much more vivid, but it also illustrates how easy, and potentially dangerous, it is to manipulate video content.

But… What is a Deepfake?


Hader’s video is an expertly crafted deepfake. Most deepfake technology is built on Generative Adversarial Networks (GANs), a technique invented in 2014 by Ian Goodfellow, then a doctoral student and now working at Apple.

GANs allow algorithms to go beyond classifying data and actually generate, or create, images. They do this by pitting two neural networks against each other: a generator produces images while a discriminator tries to tell which ones are “real” and which are fabricated. With just one image of a person, a well-trained GAN can create a video clip of that person. In 2019, the Samsung Artificial Intelligence Center published research sharing the science behind this approach.

“Crucially, the system is able to initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters.

“We show that this approach is capable of learning highly realistic and personalized talking head models of new people and even portrait paintings.”

said the researchers behind the paper.

For now, this only applies to “talking head” videos. But with 47% of users getting their news through online video content, what will happen when GANs can make people dance, clap, or be manipulated in other ways?
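To make the adversarial idea concrete, here is a minimal sketch of a GAN training loop, assuming PyTorch; the tiny fully connected generator and discriminator are illustrative placeholders, not the person-specific models described in the Samsung research.

```python
# A minimal GAN training loop (PyTorch). Illustrative only: real deepfake
# systems use far larger, person-specific generators and discriminators.
import torch
import torch.nn as nn

IMG_DIM, NOISE_DIM = 28 * 28, 64  # flattened image size and latent size (arbitrary)

# Generator: turns random noise into a fake "image" vector.
generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: outputs a logit scoring how "real" an image looks.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def training_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Train the discriminator to label real images 1 and generated images 0.
    fake_images = generator(torch.randn(batch, NOISE_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_images), ones)
              + loss_fn(discriminator(fake_images), zeros))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to produce images the discriminator labels "real".
    fake_images = generator(torch.randn(batch, NOISE_DIM))
    g_loss = loss_fn(discriminator(fake_images), ones)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# In practice real_images would come from a dataset; random data keeps the sketch runnable.
training_step(torch.randn(16, IMG_DIM))
```

The key dynamic is that each network’s loss depends on the other’s success: as the discriminator gets better at spotting fakes, the generator is forced to produce ever more convincing ones.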

Why are Fake News or false news dangerous?


Even if we set aside the fact that more than 30 countries are actively engaged in cyberwarfare at any given time, the biggest concern with deepfakes might be things like the ill-conceived website and app DeepNudes, where the faces of celebrities, and of ordinary women, could be superimposed onto pornographic video content.

The founder of DeepNudes ended up canceling the website’s launch, fearing that “the probability that people will misuse it is too high.” So what else could people do with fake pornographic content?

“At the most basic level, deepfakes are lies disguised to look like truth.”

“If we take them as truth or evidence, we can easily draw false conclusions with potentially disastrous consequences.”

Andrea Hickerson, director of the School of Journalism and Mass Communication at the University of South Carolina.

Much of the fear about deepfakes rightly concerns politics, Hickerson says: What if a deepfake video portrays a political leader inciting violence or panic? Could other countries be forced to act if the threat were immediate?

One use that can be considered high risk is producing “fake news” during election periods in any nation. Add to that the continuing threat of cyberattacks and cyberwarfare, and we have to seriously consider some “scary” scenarios:

  • Deepfakes will be used during election periods to ostracize, isolate, and further divide the electorate of any country.
  • Deepfakes will be used to influence not only voting behavior but also the consumption preferences of hundreds of millions of people.
  • Deepfakes will be used in spear phishing and other known cyberattack strategies to target victims more effectively.

This means that deep fakes put companies, individuals and governments at greater risk.

“The problem is not GAN technology, necessarily”

“The problem is that bad actors currently have an inordinate advantage and there are no solutions to deal with the growing threat. However, there are a number of solutions and new ideas emerging in the AI community to combat this threat. Even so, the solution must put human beings first.”

Ben Lamm, CEO of the AI company Hypergiant Industries.

A new danger: deepfake financial scams

Steps to convert audio clips into a realistic lip-synchronized video

Do you remember your first robocall? Maybe not, considering that robocalls were pretty convincing a few years ago, when most of us still didn’t understand what they were. Fortunately, those scam calls have been on the decline: complaints about robocalls have reportedly fallen by more than 60% in recent years.

However, deepfake technology applied to fake audio could easily reinvigorate that deceptive tactic. According to Nisos, a cybersecurity company based in Alexandria, Virginia, hackers are using machine learning to clone people’s voices. In one documented case, hackers used synthetic deepfake audio in an attempt to defraud a tech company.

The attempt came in the form of a voice message that appeared to come from the technology company’s CEO, asking an employee to call him back to “finalize an urgent business deal.”

“The recipient immediately thought it was suspicious and did not contact the number, instead referring it to their legal department, and the attack was unsuccessful as a result,” Nisos notes in a July 2020 whitepaper.

What is currently being done to combat Fake News?

The EU forces big technology companies to manage deepfakes

In recent years, the US House Intelligence Committee has sent letters to Twitter, Facebook, and Google asking how the social networks planned to combat deepfakes in upcoming elections. The inquiry came largely after former President Donald Trump tweeted a manipulated video of House Speaker Nancy Pelosi.

In 2020, Facebook took a positive step toward banning deepfakes. In a blog post from January 6 of that year, Monika Bickert, Facebook’s vice president of global policy management, wrote that the company was making new efforts to “remove misleading manipulated media.”

Facebook is taking a targeted, two-pronged approach to flagging and removing deepfakes on its platform. According to the blog post, a video must meet both of the following criteria to be removed (a rough sketch of this test appears below):

  • It has been edited or synthesized (beyond clarity or quality adjustments) in a way that is not obvious to the average person and could lead someone to believe that a subject in the video has said words that they have not actually said.
  • It is the product of artificial intelligence or machine learning that blends, replaces, or overlays content onto a video, making it look authentic.

Nevertheless, satire and parody videos are still safe, as are videos that have been edited solely to omit or change the order of words. This means that manipulated media can still slip through the cracks. Notably, TikTok and Twitter have similar policies.
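As a toy illustration only, not Facebook’s actual enforcement system, the two criteria and the exemptions above can be read as a simple boolean test; the field names here are hypothetical:

```python
# Toy reading of the removal criteria described above. The field names are
# hypothetical; Facebook's real review process is far more involved and
# partly human-driven.
from dataclasses import dataclass

@dataclass
class Video:
    misleads_about_spoken_words: bool   # edited so a subject appears to say words they never said
    ai_synthesized_overlay: bool        # AI/ML blended, replaced or overlaid content to look authentic
    is_satire_or_parody: bool           # exempt under the policy
    only_omits_or_reorders_words: bool  # exempt under the policy

def should_remove(video: Video) -> bool:
    # Exemptions short-circuit the test, which is the loophole noted above.
    if video.is_satire_or_parody or video.only_omits_or_reorders_words:
        return False
    # Otherwise both prongs of the policy must apply.
    return video.misleads_about_spoken_words and video.ai_synthesized_overlay

# An AI face swap that puts invented words in someone's mouth: removed.
print(should_remove(Video(True, True, False, False)))  # True
# The same clip framed as parody: allowed to stay up.
print(should_remove(Video(True, True, True, False)))   # False
```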

Meanwhile, government agencies such as DARPA and researchers at universities including Carnegie Mellon, the University of Washington, Stanford University, and the Max Planck Institute for Informatics are also experimenting with deepfake technology. Disney is, too. These organizations are studying how to use GAN technology, but also how to combat it.

By feeding algorithms both real videos and deepfakes, they hope to help computers identify when something is a deepfake and when it is real. If this sounds like an arms race, that’s because it is: we are using technology to fight technology in a race with no end in sight.
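A heavily simplified sketch of what that training setup can look like, assuming PyTorch; the tiny convolutional classifier and the random tensors standing in for labeled real and fake frames are placeholders, not any of these labs’ actual pipelines:

```python
# Simplified sketch of training a real-vs-fake frame classifier. The random
# tensors below stand in for a labeled dataset of video frames
# (label 1 = deepfake, 0 = real); no particular dataset or lab is implied.
import torch
import torch.nn as nn

detector = nn.Sequential(                 # tiny CNN, illustrative only
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),                     # logit: > 0 means "looks fake"
)
optimizer = torch.optim.Adam(detector.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

def train_batch(frames: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on a batch of frames (N, 3, H, W) with 0/1 labels."""
    logits = detector(frames).squeeze(1)
    loss = loss_fn(logits, labels.float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Random tensors keep the sketch runnable; real training would loop over many
# labeled frames drawn from both genuine footage and known deepfakes.
dummy_frames = torch.randn(8, 3, 128, 128)
dummy_labels = torch.randint(0, 2, (8,))
print(train_batch(dummy_frames, dummy_labels))
```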

Recent news confirms it: the European Union is now obliging large technology companies to manage and regulate deepfakes on their platforms.

What are the solutions against Fake News disinformation?


Perhaps the solution is not technology at all. Other recent research suggests that mice could hold the key. Researchers at the University of Oregon’s Institute of Neuroscience believe that “a mouse model, given the powerful genetic and electrophysiological tools for probing neural circuitry available to them, has the potential to powerfully increase the mechanistic understanding of phonetic perception.”

In other words, mice could inform the next generation of algorithms and devices that detect fake video and audio. Nature could counter technology, but it is still an arms race.

Although advances in detection technology could help expose fakes, it may already be too late. Once trust in a technology is eroded, it is almost impossible to restore. If the credibility of video is corrupted, how long will it be before we lose confidence in television news, Internet clips, or live-streamed historical events? Maybe we are already late…

“‘Fake news’ videos threaten our civic discourse and can cause serious psychological and reputational damage to individuals. They also make it even more difficult for platforms to engage in responsible online content moderation.”

“While the public is understandably calling for social media companies to develop techniques to detect and prevent the spread of deepfakes…”

Sharon Bradford Franklin, policy director of New America’s Open Technology Institute.

If restrictive legislation is not the solution, should the technology be banned outright? Many argue yes, but new research suggests that GANs could be used to improve multi-resolution schemes that deliver better image quality and avoid patch artifacts in X-rays, and that other medical uses could be just around the corner.

Is that enough to offset the damage? Medicine matters, but so does safeguarding the foundation of our trust in the content we consume every day across different media.

How to detect a Deepfake? How do you know what is real and what is fake?

Many people have already lost trust in the news, and as deepfake technology advances, the amount of fake news is only going to grow.

“The best way to protect yourself from a deepfake is never to take a video at face value.” “We cannot assume that seeing is believing. The public should independently search for related contextual information and pay special attention to who is sharing a video and why. In general, people are careless about what they share on social media. Even if your best friend shares it, you should think about where they got it from. Who or what is the original source?”

Andrea Hickerson, director of the School of Journalism and Mass Communication at the University of South Carolina.

The solution to this problem will have to be driven by individuals until regulators, technologists, or technology companies can find an answer. And if there is no immediate push to find one, it may be too late.

What we should all do is hold the platforms that spread this information to account, push governments to ensure that the technology has enough positive use cases to offset the negative ones, and make sure education teaches us to recognize deepfakes and gives us enough common sense not to share them.

Otherwise, we may find ourselves in a cyber war that a hacker started based solely on an edited video. Then what?
