Blog: Comparative Analysis of MeitY AI Advisory

On March 15, 2024, the Ministry of Electronics and Information Technology (MeitY) issued a revised advisory to significant intermediaries and platforms in the country concerning the use of artificial intelligence (AI), effectively superseding the two-page note it issued on March 1, 2024. The new advisory reinforces the due diligence required of intermediaries and platforms under the Information Technology Act, 2000, and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.

Below, we present a provision-by-provision comparative analysis of the new advisory against the old one, delineating the changes made to each provision.

For each provision, we quote the old advisory (issued on 01/03/2024) and the new advisory (issued on 15/03/2024), followed by our explanation of the change.
Point 2(a): Unlawful Content

Old advisory: “All intermediaries or platforms to ensure that use of Artificial Intelligence model(s)/LLM/Generative AI, software(s) or algorithm(s) on or through its computer resource does not permit its users to host, display, upload, modify, publish, transmit, store, update or share any unlawful content as outlined in the Rule 3(1)(b) of the IT Rules or violate any other provision of the IT Act.”

New advisory: “Every intermediary and platform should ensure that use of Artificial Intelligence model(s)/LLM/Generative AI, software(s) or algorithm(s) on or through its computer resource does not permit its users to host, display, upload, modify, publish, transmit, store, update or share any unlawful content as outlined in the Rule 3(1)(b) of the IT Rules or violate any other provision of the Information Technology Act 2000 (IT Act) and other laws in force.”

Explanation: The new advisory is addressed to both intermediaries and platforms, applying to every player within the AI ecosystem beyond intermediaries. Further, it expands the scope of unlawful content beyond the delineations specified in Rule 3(1)(b) of the IT Rules and other provisions of the IT Act 2000: it now also encompasses content deemed unlawful under “other laws in force,” thereby broadening the definition of “unlawful content” outlined in the advisory.
Point 2(b): Due Diligence Requirements

Old advisory: “All intermediaries or platforms to ensure that their computer resource do not permit any bias or discrimination or threaten the integrity of the electoral process including via the use of Artificial Intelligence model(s)/LLM/Generative AI, software(s) or algorithm(s).”

New advisory: “Every intermediary and platform should ensure that its computer resource in itself or through the use of Artificial Intelligence model(s)/LLM/Generative AI, software(s) or algorithm(s) does not permit any bias or discrimination or threaten the integrity of the electoral process.”

Explanation: The new advisory draws a clear distinction between the intermediary or platform’s computer resource in itself and its use of AI models/LLM/Generative AI, software, or algorithms: neither may permit any bias or discrimination or threaten the integrity of the electoral process. Point 2(b) also extends the due diligence requirements to the AI procurer. When intermediaries and platforms use AI models/LLM/Generative AI, software, or algorithms developed by third parties, the responsibility of ensuring that the procured solution does not permit any bias or discrimination falls on the intermediaries and platforms (the AI procurers).
Point 2(c): Under-tested/Unreliable AI Models

Old advisory: “The use of under-testing / unreliable Artificial Intelligence model(s)/LLM/Generative AI, software(s) or algorithm(s) and its availability to the users on Indian Internet must be done so with explicit permission of the Government of India and be deployed only after appropriately labeling the possible and inherent fallibility or unreliability of the output generated. Further, ‘consent popup’ mechanism may be used to explicitly inform the users about the possible and inherent fallibility or unreliability of the output generated.”

New advisory: “Under-tested or unreliable Artificial Intelligence foundational model(s)/LLM/Generative AI, software(s) or algorithm(s) or further development on such models should be made available to users in India only after appropriately labeling the possible inherent fallibility or unreliability of the output generated. Further, ‘consent popup’ or equivalent mechanisms may be used to explicitly inform the users about the possible inherent fallibility or unreliability of the output generated.”

Explanation: The latest advisory eliminates the need to obtain explicit permission from the government. Instead, it mandates that under-tested or unreliable AI models be made available in India only after being clearly labeled to notify users about the potential inherent fallibility or unreliability of the generated output. Further, in addition to the ‘consent popup’ advised earlier, intermediaries and platforms may now employ equivalent mechanisms to explicitly inform users about the possible inherent fallibility or unreliability of the output. A minimal sketch of one such labeling mechanism follows.
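To make the labeling obligation concrete, here is a minimal Python sketch of how a platform might attach a fallibility notice to every response from an under-tested model. The notice text, function names, and response structure are our own illustrative assumptions; the advisory only requires that the fallibility be communicated, not how.

```python
# Minimal sketch: attaching a fallibility notice to model output.
# Notice text, names, and structure are illustrative assumptions only;
# the advisory does not prescribe a particular mechanism.

FALLIBILITY_NOTICE = (
    "This response was generated by an under-testing AI model and may be "
    "unreliable or incorrect."
)

def with_fallibility_label(generate):
    """Wrap a generation function so every response carries the notice."""
    def wrapper(prompt: str) -> dict:
        return {
            "notice": FALLIBILITY_NOTICE,  # surfaced to the user, e.g. via a popup
            "output": generate(prompt),
        }
    return wrapper

@with_fallibility_label
def generate(prompt: str) -> str:
    # Placeholder for the actual model call.
    return f"(model response to: {prompt!r})"

print(generate("What does the advisory require?"))
```

An ‘equivalent mechanism’ in the new advisory’s sense could then be anything that reliably surfaces the notice field to the user, such as a one-time acknowledgment dialog or an inline banner.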
Point 2(d): Informing the User

Old advisory: “All users must be clearly informed including through the terms of services and user agreements of the intermediary or platforms about the consequence of dealing with the unlawful information on its platform, including disabling of access to or removal of non-compliant information, suspension or termination of access or usage rights of the user to their user account, as the case may be, and punishment under applicable law.”

New advisory: “Every intermediary and platform should inform its users through the terms of service and user agreements about the consequences of dealing with unlawful information, including disabling of access to or removal of such information; suspension or termination of access or usage rights of the user to their user account, as the case may be; and punishment under the applicable law.”

Explanation: The new advisory retains the obligation for intermediaries and platforms to apprise their users of the ramifications of engaging with unlawful information. It clarifies that these consequences encompass, among others, the removal of “such” unlawful information; the previous advisory used the term “non-compliant information.”


Point 3: Addressing Synthetic Content

Old advisory: “Where any intermediary through its software or any other computer resource permits or facilitates synthetic creation, generation or modification of a text, audio, visual or audiovisual information, in such a manner that such information may be used potentially as misinformation or deepfake, it is advised that such information created, generated, or modified through its software or any other computer resource is labeled or embedded with a permanent unique metadata or identifier, by whatever name called, in a manner that such label, metadata or identifier can be used to identify that such information has been created, generated or modified using computer resource of the intermediary, or identify the user of the software or such other computer resource, the intermediary through whose software or such other computer resource such information has been created, generated or modified and the creator or first originator of such misinformation or deepfake.”

New advisory: “Where any intermediary through its software or any other computer resource permits or facilitates synthetic creation, generation or modification of a text, audio, visual or audiovisual information, in such a manner that such information may be used potentially as misinformation or deepfake, it is advised that such information created, generated, or modified through its software or any other computer resource is labeled or embedded with a permanent unique metadata or identifier, in a manner that such label, metadata, or identifier can be used to identify that such information has been created, generated, or modified using the computer resource of the intermediary. Further, in case any changes are made by a user, the metadata should be so configured to enable identification of such user or computer resource that has effected such change.”

Explanation: The new advisory maintains the requirement for intermediaries to label or embed AI-generated, modified, or created synthetic information with a permanent unique metadata or identifier. While the requirement to identify the creator or first originator of the misinformation/deepfake has been dropped from the revised version, the new advisory stipulates that if changes are made by a user, the metadata should be configured to enable identification of the user or computer resource responsible for those modifications. In a way, the first originator of the misinformation or deepfake could still be identified, as the metadata would record any synthetic changes made to the information. A sketch of what such embedded metadata could look like follows.
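As an illustration, here is a minimal Python sketch that embeds a provenance record in a PNG text chunk using the Pillow library. The metadata key, record fields, and chain-of-edits design are our own assumptions for illustration; the advisory does not prescribe a format, and a production system would likely use a tamper-resistant provenance standard such as C2PA rather than a plain, strippable text chunk.

```python
# Minimal sketch: embedding a provenance record in a PNG text chunk.
# Key name and record fields are illustrative assumptions only; the
# advisory does not prescribe a format. Requires: pip install Pillow
import json
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo

PROVENANCE_KEY = "ai_provenance"  # hypothetical metadata key


def embed_provenance(src_path: str, dst_path: str,
                     provider_id: str, user_id: str) -> None:
    """Append a provenance record identifying the intermediary's computer
    resource and the user who created or modified the image."""
    img = Image.open(src_path)

    # Carry forward any existing records so the chain of edits survives,
    # which is how the first originator would remain identifiable.
    records = []
    if hasattr(img, "text"):
        records = json.loads(img.text.get(PROVENANCE_KEY, "[]"))
    records.append({
        "provider_id": provider_id,  # the intermediary's computer resource
        "user_id": user_id,          # the user who effected this change
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })

    meta = PngInfo()
    meta.add_text(PROVENANCE_KEY, json.dumps(records))
    img.save(dst_path, pnginfo=meta)


# Usage: each creation or edit appends an entry, e.g.
# embed_provenance("generated.png", "labeled.png", "example-genai-service", "user-42")
```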
Point 4: Consequences of Non-Compliance

Old advisory: “It is reiterated that non-compliance to the provisions of the IT Act and/or IT Rules would result in potential penal consequences to the intermediaries or platforms or its users when identified, including but not limited to prosecution under IT Act and several other statutes of the criminal code.”

New advisory: “It is reiterated that non-compliance with the provisions of the IT Act 2000 and/or IT Rules could result in consequences including but not limited to prosecution under the IT Act 2000 and other criminal laws, for intermediaries, platforms and their users.”

Explanation: The new advisory explicitly brings the country’s criminal laws within the scope of legislation under which intermediaries, platforms, and their users may face prosecution for non-compliance with the provisions of the IT Act 2000 and/or IT Rules. Previously, this provision referred to “several other statutes of the criminal code.”
Point 5: Compliance Requirements

Old advisory: “All intermediaries are, hereby requested to ensure compliance with the above with immediate effect and to submit an Action Taken-cum-Status Report to the Ministry within 15 days of this advisory.”

New advisory: “All intermediaries are, hereby requested to ensure compliance with the above with immediate effect.”

Explanation: Intermediaries are no longer obligated to submit an action taken-cum-status report. However, compliance with the directives outlined in the advisory remains mandatory with immediate effect.

Authors:

Senior Programme Manager - Privacy, Data Governance and AI

Senior Programme Manager - Emerging Technologies