Deepfakes and Generative Adversarial Networks (GANs) are among the most hyped developments in AI. Deepfake technology in particular is causing an uproar across the globe: researchers are working on it intensively, and it has drawn the attention of academia and politicians alike. Let us dive in.
Deepfake AI, or simply a deepfake, is a deep learning technique that replaces one person's face with another's in a video. For instance, you could swap your face with Barack Obama's and mimic his movements. The word 'deepfake' is a blend of 'deep learning' and 'fake'. Although ML and AI techniques have been used to fake content for decades, deepfake technology takes it to another level.
At their core, deepfakes rely on Generative Adversarial Networks (GANs) and autoencoders. They are trained much like conventional neural networks and depend heavily on data: the more training footage of a target face, the more convincing the resulting deepfake.
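To make the adversarial idea concrete, here is a minimal sketch of a GAN training loop. It assumes PyTorch, and the layer sizes, toy data, and loop length are purely illustrative; a real deepfake pipeline trains convolutional encoders and decoders on cropped, aligned face images rather than random vectors.

```python
# Toy GAN sketch in PyTorch (illustrative only; not a real deepfake pipeline).
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # hypothetical sizes for a toy example

# Generator: maps random noise to a fake "image" vector.
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: predicts whether an input is real or generated.
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(100):                          # toy training loop
    real = torch.rand(32, IMG_DIM) * 2 - 1       # stand-in for real face images
    noise = torch.randn(32, LATENT_DIM)
    fake = G(noise)

    # Train the discriminator to separate real from fake.
    d_loss = bce(D(real), torch.ones(32, 1)) + \
             bce(D(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Train the generator to fool the discriminator.
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The generator and discriminator improve in tandem: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more realistic output, which is exactly why deepfakes can look so convincing.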
Law and privacy expert Danielle Citron has expressed her concerns about deepfake technology. Malicious actors have been using deepfake AI to fabricate news stories and non-consensual pornographic content featuring influential people. Indian journalist Rana Ayyub was one of the most prominent victims of such a controversy: someone used deepfake technology to generate an intimate video of her, which quickly became the talk of media houses.
There have also been numerous instances of deepfakes targeting world leaders, including Vladimir Putin, Barack Obama, and Donald Trump. This is a matter of international concern, because such fabricated clips could incite rebellion in unstable nations.
Most countries follow democratic or socialist systems, where statements by presidents and prime ministers can swing public sentiment sharply. Malicious actors are using deepfake AI to spread rumours and incite violence and communal hatred in society. This malicious use of the technology fuels fake propaganda, which in turn leads to needless violence, protests, rage, and cyber-mob attacks on the victims.
Attackers use deepfakes to generate fake videos as well as audio clips. Even otherwise rational people believe these doctored clips because they look and sound authentic. Victims often lose their credibility with the public because of such fake propaganda, and the threat extends to journalists and news anchors. A single clip carrying an inflammatory message can trigger a democratic crisis.
Sharing videos without checking their source only helps these actors. If we never question the authenticity of a video or audio clip, we will believe it blindly. One feasible countermeasure is to bind video files to a blockchain: a system could verify each video against a publicly recorded hash, so any tampering changes the hash and is immediately detectable. Such a system would also help trace a video or audio clip back to its source and identify the culprits.
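As a rough illustration of the hash-verification idea, here is a minimal Python sketch. It uses only the standard hashlib module; the published_hashes set is a hypothetical stand-in for whatever public ledger or blockchain registry such a system would actually query.

```python
# Minimal sketch of hash-based video verification (standard library only).
import hashlib
from pathlib import Path

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a (possibly large) video file in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def is_authentic(path: Path, published_hashes: set[str]) -> bool:
    """Treat a clip as authentic only if its hash appears in the public registry."""
    return sha256_of_file(path) in published_hashes

# Usage: the publisher records the hash at release time; viewers re-check it.
# registry = {"<hash recorded on the ledger>"}          # hypothetical entries
# print(is_authentic(Path("speech.mp4"), registry))
```

Because even a one-bit change to the file produces a completely different digest, a tampered or re-generated clip would fail this check, while the ledger entry points back to whoever originally published the file.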
Beyond that, it is our responsibility to verify a file or message before sharing it within our networks. Even when someone creates a deepfake, we can stop it from going viral. For all its technical ingenuity, deepfake technology is a genuine threat to democracy, and it is important to keep an eye on the regulations governments adopt to tackle the problem.