IT Rules Amendment 2025: Deepfakes and Unlawful Content Regulation


Background and Rationale 

India's digital ecosystem has witnessed a surge in AI-generated content, popularly known as "deepfakes": highly realistic synthetic videos, images, or audio created or altered through artificial intelligence.

These technologies, while useful for creative and educational purposes, have also been misused for:

  • Political misinformation and election manipulation
  • Non-consensual sexual imagery and impersonation
  • Defamation, fraud, and reputational harm

To tackle these concerns, the Government has proposed the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025, targeting synthetic and unlawful online content.


Issued by: Ministry of Electronics and Information Technology (MeitY), Government of India

Date: Draft released on 22 October 2025

Open for Comments: Till 6 November 2025

Proposed Enforcement: From 15 November 2025 (expected)

Legal Basis: Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 - amendment under Section 87 of the IT Act, 2000.

Key Highlights of the Draft IT Rules (Amendment 2025) 

New Definition Introduced: "Synthetically Generated Information" 

A new clause, Rule 2(1)(wa), defines this as:

"Information that is artificially or algorithmically created, generated, modified, or altered using a computer resource, in a manner that appears reasonably authentic or true."

This brings deepfakes, AI-edited images, AI-generated voices, and AI-generated text explicitly under the IT Rules.

Mandatory Labelling and Metadata Requirements 

To ensure authenticity and transparency, all synthetic content must carry both visible and machine-readable labels.

Key provisions: 

  • Every piece of synthetically generated content must have a permanent unique identifier.
  • It must be visibly displayed or audibly announced as synthetic.
  • Quantitative requirement:

  1. For visual media: the label must cover at least 10% of the surface area.
  2. For audio: the identifier must be announced within the first 10% of playback.

  • Metadata must contain the origin, tool used, and modification history, embedded in the file.

This rule applies to all intermediaries that create, modify, or host synthetic content.
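The quantitative thresholds above lend themselves to a simple compliance check. The sketch below is illustrative only, assuming a plain-Python pipeline: the function names and the JSON metadata layout are my own, not anything prescribed by the draft Rules, which specify the substance (origin, tool, modification history) but no schema.

```python
import json

def min_label_area(width_px: int, height_px: int) -> int:
    """Visible label must cover at least 10% of the media's surface area."""
    return (width_px * height_px) // 10

def audio_announce_window_s(duration_s: float) -> float:
    """The synthetic-audio identifier must be announced within the
    first 10% of playback."""
    return duration_s * 0.10

def build_provenance_metadata(origin: str, tool: str, history: list[str]) -> str:
    """Serialize the provenance record the draft requires to be embedded
    in the file: origin, tool used, and modification history.
    (JSON layout is a hypothetical choice, not mandated by the Rules.)"""
    return json.dumps(
        {"origin": origin, "tool": tool, "modification_history": history}
    )

# Example: a 1920x1080 frame and a 90-second audio clip.
print(min_label_area(1920, 1080))     # 207360 px^2 (10% of 2,073,600)
print(audio_announce_window_s(90.0))  # identifier must play within first 9.0 s
```

In practice the visible label's shape and placement are up to the platform; the draft fixes only the 10% floor, which is what the arithmetic above captures.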

Enhanced Due Diligence for Intermediaries 

A new Rule 3(3) is proposed, imposing special obligations on:

  • Creation platforms (e.g., AI video/image generators), and
  • Hosting platforms (e.g., social media networks).

They must:

  • Label and embed metadata in all AI-generated content.
  • Allow users to self-declare if content is synthetic.
  • Remove any unlawful synthetic content within 36 hours of "actual knowledge".

'Actual knowledge' is defined narrowly to prevent arbitrary removals; it is triggered only by:

  • A court order, or
  • A reasoned written order from a competent government authority.
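Taken together, the 36-hour clock and the narrow "actual knowledge" triggers can be sketched as a small deadline calculator. This is a minimal sketch under assumptions: the trigger labels below are hypothetical tokens standing in for the draft's two recognised sources (a court order, or a reasoned written government order).

```python
from datetime import datetime, timedelta, timezone

# Per the draft's narrow definition, only these two sources
# constitute "actual knowledge" (labels are illustrative).
VALID_TRIGGERS = {"court_order", "reasoned_government_order"}

def removal_deadline(trigger: str, received_at: datetime) -> datetime:
    """Unlawful synthetic content must be removed within 36 hours
    of 'actual knowledge'."""
    if trigger not in VALID_TRIGGERS:
        raise ValueError(
            "'actual knowledge' is not triggered by ordinary user reports"
        )
    return received_at + timedelta(hours=36)

received = datetime(2025, 11, 15, 9, 0, tzinfo=timezone.utc)
print(removal_deadline("court_order", received).isoformat())
# -> 2025-11-16T21:00:00+00:00
```

The `ValueError` branch reflects the point of the narrow definition: a flood of user flags does not, by itself, start the 36-hour clock.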

Accountability and Oversight in Content Removal 

To enhance transparency and avoid misuse of takedown powers:

  • Takedown orders must now come from senior officers:

  1. Joint Secretary-level or above in the Central Government, or
  2. DIG-rank or equivalent in State Police.

  • Each takedown order must:

  1. Clearly specify the legal basis,
  2. Mention exact URLs/content IDs, and
  3. Provide a reasoned explanation.

  • Blanket or non-specific orders will no longer be permitted.
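A hosting platform's intake pipeline could encode these order-validity checks roughly as follows. The field names (`issuer_rank`, `legal_basis`, `urls`, `reasons`) are illustrative assumptions; the draft prescribes the substance of a valid order, not a data schema.

```python
# Senior-officer requirement: Joint Secretary-level or above (Centre),
# or DIG-rank or equivalent (State Police). Labels are illustrative.
AUTHORISED_RANKS = {"joint_secretary_or_above", "dig_or_equivalent"}

def is_valid_takedown_order(order: dict) -> bool:
    """An order must come from a senior officer, cite its legal basis,
    list exact URLs/content IDs, and give a reasoned explanation."""
    return (
        order.get("issuer_rank") in AUTHORISED_RANKS
        and bool(order.get("legal_basis"))   # legal basis must be specified
        and bool(order.get("urls"))          # exact URLs/content IDs required
        and bool(order.get("reasons"))       # reasoned explanation required
    )

# A blanket order with no specific URLs fails the check.
blanket_order = {
    "issuer_rank": "joint_secretary_or_above",
    "legal_basis": "IT Act s.69A",
    "urls": [],
    "reasons": "remove all deepfakes",
}
print(is_valid_takedown_order(blanket_order))  # -> False
```

The empty-`urls` rejection is exactly the "no blanket or non-specific orders" rule in action: specificity is a hard precondition, not a courtesy.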

Safe-Harbour and Liability Protection 

The amendment preserves intermediaries' protection under Section 79 of the IT Act, provided they:

  • Act reasonably and diligently,
  • Remove unlawful content once they receive proper "actual knowledge", and
  • Maintain systems for user grievances and content verification.

Failure to comply with due-diligence obligations may result in loss of safe-harbour, exposing platforms to legal liability.

Objectives and Policy Justification 

  1. Combat misinformation and impersonation by ensuring users know what content is synthetic.
  2. Establish traceability and accountability through metadata and provenance records.
  3. Empower authorities with a formal, transparent process for takedown orders.
  4. Balance innovation and responsibility by maintaining safe-harbour for compliant intermediaries.
  5. Protect elections, public order, and individual dignity from misuse of deepfake technologies.

Global Context 

India joins a global trend of regulating AI-generated content:

  • European Union: AI Act and Digital Services Act require provenance labelling.
  • China: 2023 Deep Synthesis Regulations mandate watermarking of synthetic media.
  • United States: ongoing legislative proposals for "AI-label laws" in California and federal bills.

India's 2025 Rules are distinctive for prescribing a quantitative visibility threshold (10%), among the strictest worldwide.

Practical Implications 

For Platforms

  • Must redesign creation tools to embed both visible and metadata labels.
  • Need to update user agreements requiring declaration of synthetic content.
  • Should deploy AI models to identify deepfakes and maintain compliance logs.
  • Have only 36 hours to act once valid "actual knowledge" is received.

For Content Creators

  • Must clearly label AI-generated content before upload.
  • Risk takedown or account suspension if the label or metadata is missing.
  • Can face penalties if deepfakes cause harm or impersonation.

For Government & Law Enforcement

  • Greater accountability: all takedown orders must be written, reasoned, and traceable to a senior official.
  • Helps curb fake news and digital impersonation, especially during elections.

Conclusion 

The IT Rules Amendment 2025 marks India's strongest policy step yet to regulate deepfakes and synthetic media.

By combining labelling mandates, metadata traceability, structured takedown protocols, and preserved safe-harbour protections, the draft seeks to balance innovation and accountability.

However, success will depend on:

  • How precisely the final rules are worded,
  • Whether implementation is technologically feasible, and
  • How well free-speech and creative freedoms are protected.

If executed with due safeguards, India could set a global precedent in responsible AI regulation and digital ethics.
