Digital
Govt tightens the screws on AI content with sharper IT rules
New norms bring labelling mandates and faster compliance timelines for platforms
NEW DELHI: Govt has moved sharply to police the fast-expanding world of AI content, amending its IT rules to formally regulate synthetically generated media and slash takedown timelines to as little as two hours.
The Union ministry of electronics and information technology (MeitY) notified the changes on February 10, with the new regime set to kick in from February 20, 2026. The amendments pull AI-generated content squarely into India’s intermediary rulebook, widening due-diligence, takedown and enforcement obligations for digital platforms.
At the heart of the change is a legal clarification: “information” used for unlawful acts now explicitly includes synthetically generated material. In effect, AI-made content will be treated on par with any other potentially unlawful information under the IT Rules.
Platforms must also step up user warnings. Intermediaries are now required to remind users at least once every three months that violating platform rules or user agreements can trigger immediate suspension, termination, content removal or all three. Users must also be warned that unlawful activity could invite penalties under applicable laws.
Certain offences carry mandatory reporting obligations: those under the Bharatiya Nagarik Suraksha Sanhita, 2023 and the Protection of Children from Sexual Offences Act, among others, must be reported to the authorities.
AI-generated content defined
The amendments introduce the term “synthetically generated information”, covering audio-visual material that is artificially or algorithmically created, modified or altered using computer resources in a way that appears real and could be perceived as indistinguishable from an actual person or real-world event.
However, routine and good-faith uses are carved out. Editing, formatting, transcription, translation, accessibility features, educational or training materials and research outputs are excluded so long as they do not create false or misleading electronic records.
Mandatory labelling and metadata
Intermediaries enabling AI content creation or sharing must ensure clear and prominent labelling of such material as synthetically generated. Where technically feasible, the content must carry embedded, persistent metadata or provenance markers, including unique identifiers linking it to the generating computer resource.
Platforms are barred from allowing the removal or tampering of these labels or metadata, a move aimed at preserving traceability.
Fresh duties for social media firms
Significant social media intermediaries face tighter obligations. Users must be required to declare whether their content is AI-generated before upload or publication. Platforms must deploy technical and automated tools to verify these declarations.
Once confirmed as AI-generated, the content must carry a clear and prominent disclosure flagging its synthetic nature.
The takedown clock speeds up
The most dramatic shift lies in timelines. The compliance window for lawful takedown orders has been cut from 36 hours to just 3 hours. Grievance redressal timelines have been halved from 15 days to 7.
For urgent complaints, the response window shrinks from 72 hours to 36. In certain specified cases, intermediaries must now act within 2 hours, down from 24.
Platforms are required to act swiftly once aware of violations involving synthetic media, whether through complaints or their own detection. Measures can include disabling access, suspending accounts and reporting matters to authorities where legally required.
Importantly, the government has clarified that removing or disabling access to synthetic content in line with these rules will not jeopardise safe-harbour protection under Section 79(2) of the IT Act.
The message is unmistakable. As AI blurs the line between real and fabricated, the state is racing to keep pace. For platforms, the era of leisurely compliance is over. In India’s digital marketplace, synthetic content now comes with very real consequences.
Digital
Ethical AI must benefit society, not dominate it, says WFEB chief Sanjay Pradhan at IAA event
At Mumbai event, ethics expert urges businesses and governments to shape AI responsibly
MUMBAI: Artificial intelligence may be racing ahead at lightning speed, but its direction must still be guided by human conscience. That was the central message delivered by Sanjay Pradhan, president of the World Forum for Ethics in Business (WFEB), during the latest edition of IAA Conversations held in Mumbai.
The session was organised by the International Advertising Association (IAA) and the Artificial Intelligence Association of India (AIAI) in association with The Free Press Journal at the Free Press House on 7 March. Addressing a packed audience, Pradhan called for stronger ethical leadership to ensure AI remains a tool that benefits humanity rather than one that governs it.
“Artificial intelligence has rapidly become one of the most powerful technologies humanity has created,” Pradhan said. “It is unlocking breakthroughs in medicine, science and creativity at a pace unimaginable just a few years ago.”
But he warned that the same technology carries serious risks. AI, he noted, can amplify disinformation faster than facts can travel, compromise privacy, deepen discrimination and disrupt millions of livelihoods. Referencing concerns raised by AI pioneers such as Geoffrey Hinton, often called the godfather of AI, Pradhan stressed that the real challenge is not whether AI will shape the world, but whether humans will shape it with ethics and wisdom.
Structuring his talk around four guiding questions (why, what, how and who), Pradhan introduced the audience to WFEB’s emerging AI Ethics Partnership, a global platform aimed at advancing responsible artificial intelligence. He outlined four priority concerns that demand urgent attention: disinformation, bias and discrimination, data privacy and job security.
To make the idea of ethical AI easier to grasp, Pradhan offered a simple metaphor. Ethical AI, he said, is like a three-layered cake. The outer layer represents the visible value ethical AI creates for businesses and society. The middle layer is the organisational culture that moves ethics from written codes to everyday practice. The innermost layer, however, is the most crucial: the conscience of individual leaders.
Drawing on Indian philosophical thought shared by WFEB co-founder Ravi Shankar, Pradhan noted that while artificial intelligence can reproduce stored knowledge, true intelligence is boundless and rooted in conscience, creativity and compassion. Practices such as breathwork and meditation, he suggested, can help leaders develop the calm clarity needed for ethical decision-making.
The event also featured a discussion with Maninder Adityaraj Singh, chief of staff and head of innovation at Rediffusion Brand Solutions Pvt Ltd, and Yash Johri, lawyer, Supreme Court of India.
Opening the session, IAA India chapter president Abhishek Karnani highlighted the need for industries to understand and engage with AI responsibly.
“AI has to be befriended and understood,” added Rediffusion managing director and AIAI national convenor Sandeep Goyal. “Its ethical use will determine whether it becomes a friend or a foe.”
As AI continues to reshape industries and societies, Pradhan ended with a simple but powerful call to action. Businesses, governments and individuals must work together to ensure that the algorithms shaping the future reflect human values rather than just cold logic.