
The Rise of Deepfakes: Understanding the Technology, Risks, and Implications

By King Stubb & Kasiva on April 25, 2023

Deepfakes are computer-generated images or videos that depict people doing or saying things they never did or said in real life. They are created using machine learning and artificial intelligence (“AI”) algorithms that analyze and alter existing videos or photographs to produce new, manipulated content. Deepfakes are being used to spread propaganda, deception, and false information, and in this context are often referred to as “deep fake news.”

Deepfakes can deceive viewers into believing that something happened when it never did. Even though deepfakes can be a form of harmless entertainment, their potential for abuse has raised concerns about false information, security, and privacy. It is therefore critical to be aware of the dangers and legal issues surrounding deepfakes and to exercise the utmost caution when assessing the reliability of online content.

Victims of Deepfakes: The Harm and Consequences

Deepfakes have been used for malicious purposes, most notably in pornography: an estimated 96% of deepfakes are pornographic videos targeting women. Deepfake pornography inflicts serious psychological harm on its victims, reducing women to sexual objects and causing them severe emotional distress. In some cases, it may even result in financial loss or loss of employment.

Legal issues arise from the potential harm caused by such malicious content. Beyond pornography, deepfakes can depict people engaging in immoral behavior or saying things they never actually said, causing them great suffering and potentially destroying their reputation. Even if the victim can disprove the deepfake, the damage has already been done. The danger of deepfakes also extends beyond individual victims: by fostering a culture of factual relativism and undermining public trust in traditional media, they endanger society at large. This loss of trust may have serious ramifications for the democratic process and civil society, as sophisticated forgeries can be deployed to undermine institutions and public safety.

Deepfake news can be exploited by non-state actors, such as terrorist and insurgent organizations, to stir unrest and influence public opinion, and by hostile nation-states to sow conflict and uncertainty. The risk of deepfake news circulating widely has become a significant concern in recent times. Politicians may also use deepfakes and fake news to discredit legitimate media and the truth, fueling a destructive “alternative facts” narrative.

The danger of deepfakes is a pressing issue with significant legal implications, including:

Intellectual Property Issues

Deepfakes are, by definition, based on content protected by intellectual property rights. The creation and distribution of deepfakes may therefore violate copyright law. Under Section 51 of the Copyright Act, 1957, infringement is a breach of a copyright holder’s exclusive rights, such as the right to reproduce, distribute, and display a protected work.

Deepfake news can also jeopardize public perception and trademark rights. If a deepfake depicts a person or organization acting in a way that damages their reputation or goodwill, it may violate their trademark or publicity rights. In India, these rights are protected by the Trademarks Act, 1999, and the right to publicity is recognized under Article 21 of the Indian Constitution as a component of the right to privacy.

Privacy and Security Issues

Privacy concerns are among the most serious legal issues raised by deepfakes. Deepfakes can be used to create fake videos or images of people in compromising or embarrassing situations without their permission or knowledge. The right to privacy is recognized as a fundamental right under Article 21 of the Indian Constitution; however, India currently has no law that specifically addresses deepfakes.

Deepfakes also pose significant security risks. They can be used to spread false information, incite violence or riots, and influence elections. In India, false information and hate speech have already resulted in mob violence and riots, and the growth of deepfakes may exacerbate the problem.

Criminal Implications

Deepfakes can be used for illegal purposes such as impersonation or fraud. For instance, a deepfake might be produced to impersonate a public official and then used to spread false information or carry out fraudulent activities. Such acts may be punishable in India under the Indian Penal Code (“IPC”) or the Information Technology Act, 2000 (“IT Act”).

Creators and distributors of deepfakes may face civil or criminal liability for their acts. In India, deepfakes that infringe the rights of others may attract liability under the Copyright Act, the Trademarks Act, or the IT Act, and deepfakes used for criminal purposes may also be punishable under the IPC.

Conclusion: Need of the hour

Currently, only a limited number of statutes, such as the IPC and the IT Act, can be used to address the dangers of deepfakes. Section 500 of the IPC prescribes the punishment for defamation, while Sections 67 and 67A of the IT Act penalize the publication or transmission of obscene and sexually explicit material in electronic form.

For example, during elections, the Representation of the People Act, 1951 forbids the production or dissemination of false or misleading information about candidates or political parties, but these provisions are insufficient. The Election Commission of India has also established guidelines requiring registered political parties and candidates to obtain prior authorization for political advertisements on electronic media, including television and social media, to help ensure their accuracy and impartiality. However, these regulations do not address the potential threats posed by deepfake content.

India’s current laws lag behind the challenges posed by emerging technologies, and the country’s legal framework for AI is insufficient to address the myriad problems generated by AI algorithms. Separate legislation is needed to control the illicit use of deepfakes and of AI more generally. Such legislation should not hamper the development of AI, but it should recognize that deepfake technology may be used to commit crimes and provide guidelines to restrict its use in such circumstances. The upcoming Digital Personal Data Protection Bill, 2022 may also help address this issue. Self-regulation alone cannot always be relied upon.

King Stubb & Kasiva,
Advocates & Attorneys


New Delhi | Mumbai | Bangalore | Chennai | Hyderabad | Mangalore | Pune | Kochi | Kolkata
Tel: +91 11 41032969 | Email: info@ksandk.com

