Race against deepfake ads: Will AI regulation catch up?

In compliance with a Supreme Court order, India's ministry of information and broadcasting has released comprehensive guidelines outlining the procedure for obtaining the self-declaration certificate it mandated, which has been required for the release of any new advertisement since 18 June.

While there is a broader ongoing debate on the shift's impact on advertisers, this article focuses specifically on advertisements that employ deepfakes created with generative AI (GenAI) on social media platforms such as Instagram, Facebook and YouTube.

In an op-ed published last year titled ‘Urgently needed: A law to protect consumers from deep fake ads,’ I underscored the rising menace of deepfake ads making misleading or fraudulent claims, thereby adversely affecting the rights of consumers and public figures. 


This is evident from a McAfee survey covering the preceding 12 months, in which 75% of Indian respondents said they had encountered some form of deepfake content, 38% said they had fallen victim to deepfake scams and 18% said they were directly affected by such fraudulent schemes. Alarmingly, 57% of those targeted mistook celebrity deepfakes for genuine content.

Deepfake menace: In my op-ed, I argued that deepfake ads can be acted against under the Consumer Protection Act (Sections 2(9), 2(28) and 2(47)) and its guidelines on misleading advertisements and dark patterns, the Digital Personal Data Protection Act, 2023 (Section 6), the Information Technology Act, 2000 (Sections 66C, 66D, 66E and 79), and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (Rules 3(1)(b) and 4(2)). However, if the identities of these advertisers are unknown, as is often the case, regulators have little scope to impose penalties.

Accordingly, I suggested that the government implement preventive measures, such as ensuring that advertisers do not employ non-consensual deepfakes, and mandate that online platforms develop effective mechanisms to combat such deceptive practices.

Although the ministry’s recent guidelines do not specifically require disclosure of AI usage in the self-certification, they are a welcome step: the self-declaration requires authorized representatives of advertisers to share their bona fide details, along with final versions of advertisements, to back their declarations.

This measure promises to resolve challenges related to identifying and locating advertisers, making them easier to trace once complaints are filed. Moreover, it empowers courts to levy substantial fines on offenders.

However, industry bodies such as the Internet and Mobile Association of India (IAMAI), the Indian Newspaper Society (INS) and the Indian Society of Advertisers (ISA) have raised concerns over the newly adopted pre-release rules, arguing that the extra layer of compliance imposes a heavy burden on advertisers, particularly smaller ones.

While these concerns are genuine, ad-industry bodies now have an opportunity to press their case in the Supreme Court for streamlining what is clearly a burdensome compliance mechanism. The online use of deepfakes of unknown origin has exposed how ineffective the current regulatory mechanism is at controlling misleading advertisements on any medium.

The ad industry could argue that while self-certification has merit, the process must be simplified so that advertising, a legitimate instrument of business, is not hampered.

An unreal challenge: Can any supervisory system prove effective against AI-enabled deepfake ads? The challenge lies in the sheer volume of digital ads, which would place an additional burden on regulators if they opted to review every ad submitted. Moreover, it is unclear how reliably even experts can distinguish dubiously motivated deepfakes from genuine ads that comply with the rules.

Hence, a possible solution, at least for online ads, is to place obligations on social media platforms to filter out deepfake ads, as they may have the requisite technology and resources to do so efficiently.

Industry bodies have suggested tasking the industry's long-standing self-regulatory model with checking violations of advertising norms instead of imposing a new compliance burden. However, this model has not proven effective and has in the past served mainly to forestall regulatory action.

Since the point is to ensure that online audiences are not exposed to deception, social media intermediaries must share the responsibility.

This was also underlined by the ministry of electronics and information technology in its March 2024 advisory, which highlighted the negligence of social media intermediaries in fulfilling their due-diligence obligations under the IT Rules (Rule 3(1)(b)).

Although non-binding, the advisory stipulates that an intermediary must not “permit its users to host, display, upload, modify, publish, transmit, store, update or share any unlawful content”.

The Supreme Court will next hear the matter on 9 July, when industry bodies are expected to present their views on the new guidelines.

The country needs to confront the growing threat of dark patterns in online ads. The apex court's intervention could not only address the shortcomings of current regulatory approaches but also set a precedent for robust measures against deceptive advertising practices.

Nayan Chandra Mishra is a research assistant to Dr C. Raja Mohan at the Council for Strategic and Defence Research and is currently working on the theme of global governance of new technologies.