Beyond the Surface: A Multidimensional Approach to Tackling the Deepfake Crisis

In the era of swiftly advancing digital technology, deepfakes have emerged as a pressing concern. These sophisticated artificial intelligence creations not only challenge the concept of truth in the media but also pose serious threats, especially in the form of gender-based violence. In response, regulatory norms around platform liability have tightened, with a growing focus on taking down problematic content. However, does erasing the digital footprint of a deepfake eliminate the psychological impact on the survivor? Do these reactive measures effectively address the potential for recurrence? Are we, as a society, too narrowly focused on the aftermath of deepfakes, neglecting the root causes and broader implications?

These questions emphasize the necessity for a more nuanced approach — one that transcends conventional methods and engages with the problem across its many dimensions.

A Whole-of-Society Perspective on Deepfakes

The prevailing strategy of relying on legal action and takedowns, while crucial, tends to overlook the societal underpinnings and technological catalysts that facilitate the creation and dissemination of deepfakes. This challenge is precisely what The Dialogue’s policy framework on tackling Tech-Facilitated Gender-Based Violence seeks to address. The framework establishes a six-pillar approach of access, prevention, intervention, response, recovery, and research to tackle digital harms more holistically.

Let’s consider the fictional case of Rhea, an 18-year-old college freshman, to see how this framework can yield a more practical and effective solution. Rhea finds herself trapped in a whirlwind of toxic ragging that has taken a deeply disturbing digital form: her seniors have been creating deepfake videos of her, tarnishing her image and causing immense psychological distress. This example underscores the lack of awareness and education around digital safety and ethics in educational institutions. Programs or modules addressing responsible online behavior and the consequences of digital violence might have made Rhea’s seniors more cognizant of the harm before engaging in such acts. This situation emphasizes the necessity of integrating digital literacy into educational curricula as a non-negotiable prevention strategy, one that fosters a responsible and empathetic online community.

Another poignant question arises: why did Rhea feel unable to discuss her ordeal with her parents or professors despite the distress caused by the deepfake videos? This silence points to a deeper societal issue: the stigma and taboo surrounding victims of digital abuse. Why do we, as a society, continue to underestimate the importance of breaking these taboos? How can we create an environment where victims like Rhea feel safe and supported in speaking out? Why aren’t we prioritizing the mental well-being of survivors as much as the prosecution of perpetrators, and how can we better integrate support services so that they are more accessible to those in need?

On the technological front, innovative safety features such as AI-enabled identification of harmful content, in-app reporting, and direct access to grievance officers represent significant strides in safety by design. These tools can play a crucial role in the early detection and prevention of digital abuse. Yet why isn’t there more fervent discourse around, and prioritization of, the safety features that companies are developing? How can we encourage more proactive measures and innovations on digital platforms to protect users like Rhea?

Similarly, why does the discourse so often center exclusively on punitive measures, such as identifying the sections of the Indian Penal Code (IPC) and the Information Technology (IT) Act under which a perpetrator can be booked? While legal action is undoubtedly important, why aren’t we also investing in research that helps us understand the psychology of perpetrators? What drives someone to create and distribute deepfake content, and how can that knowledge be used to prevent such incidents?

Embracing a Holistic Strategy

In essence, a focus that disproportionately emphasizes certain aspects while neglecting others proves inadequate in addressing the problem at its root. By acknowledging and addressing all facets of the deepfake issue with equal vigor, we transcend the limitations of conventional approaches. This comprehensive strategy calls for collective efforts from all societal actors, moving beyond isolated instances of success and failure. In doing so, we equip ourselves not only to confront present challenges like deepfakes but also to navigate future technological dilemmas. It is a call to action for all stakeholders—policymakers, educators, technology developers, and citizens—to unite in safeguarding the integrity of our digital interactions. Together, we strive for a digital landscape that fosters responsibility, empathy, and resilience.

📚 Read our white paper ‘Prevention, Detection, Reporting, and Compliance: A Comprehensive Approach towards Tackling Deepfakes in India’, which sheds light on the dynamic intersection of synthetic media and deepfakes and emphasizes the need for tailored regulatory approaches. Access the publication here.

🎧 Our podcast episode ‘#SaferInternetDay- Unpacking Deepfakes for a Safer Digital Future’ explores this subject further. Listen here.

Authors:

Senior Programme Manager - Platform Regulation, Gender and Tech