Digital
Navi Technologies Ltd renamed to Navi Ltd
MUMBAI: Navi Technologies Ltd announced that it has officially changed its name to Navi Ltd, a move that reflects the company’s evolution from a technology-led disruptor to a holistic financial services destination built around the needs of Indian consumers.
The rebranding underscores the company’s ambition to position itself as an integrated, customer-centric institution encompassing lending, insurance, asset management, and UPI services, committed to making money simple, seamless, and accessible for millions of Indians.
“The new name fits who we are today. Not only are we a technology provider, we are a full-fledged destination for financial services for our customers,” said Navi Group founder and executive chairman, Sachin Bansal. “It signals both simplification and scale – two ideas core to our philosophy.”
Navi Ltd MD & CEO Rajiv Naresh added, “This change aligns with the company’s evolution. While technology remains core to how we build, our focus today is broader. The new name reflects the company we’ve become: more integrated, more customer-focused, and ready for the next phase of growth.”
Govt tightens the screws on AI content with sharper IT rules
New norms bring labelling mandates and faster compliance timelines for platforms
NEW DELHI: Govt has moved sharply to police the fast-expanding world of AI content, amending its IT rules to formally regulate synthetically generated media and slash takedown timelines to as little as two hours.
The Union ministry of electronics and information technology (MeitY) notified the changes on February 10, with the new regime set to kick in from February 20, 2026. The amendments pull AI-generated content squarely into India’s intermediary rulebook, widening due-diligence, takedown and enforcement obligations for digital platforms.
At the heart of the change is a legal clarification: “information” used for unlawful acts now explicitly includes synthetically generated material. In effect, AI-made content will be treated on par with any other potentially unlawful information under the IT Rules.
Platforms must also step up user warnings. Intermediaries are now required to remind users at least once every three months that violating platform rules or user agreements can trigger immediate suspension, termination, content removal or all three. Users must also be warned that unlawful activity could invite penalties under applicable laws.
Platforms must also report certain offences to the authorities where reporting is mandatory, including those under the Bharatiya Nagarik Suraksha Sanhita, 2023 and the Protection of Children from Sexual Offences Act.
AI-generated content defined
The amendments introduce the term “synthetically generated information”, covering audio-visual material that is artificially or algorithmically created, modified or altered using computer resources in a way that appears real and could be perceived as indistinguishable from an actual person or real-world event.
However, routine and good-faith uses are carved out. Editing, formatting, transcription, translation, accessibility features, educational or training materials and research outputs are excluded so long as they do not create false or misleading electronic records.
Mandatory labelling and metadata
Intermediaries enabling AI content creation or sharing must ensure clear and prominent labelling of such material as synthetically generated. Where technically feasible, the content must carry embedded, persistent metadata or provenance markers, including unique identifiers linking it to the generating computer resource.
Platforms are barred from allowing the removal or tampering of these labels or metadata, a move aimed at preserving traceability.
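The traceability mechanism the rules describe, a label plus tamper-evident metadata tying content to its generating system, can be sketched in a few lines. The field names below are purely illustrative assumptions, not drawn from the IT Rules or any mandated standard (C2PA is one real provenance standard in this space); the sketch only shows why a content hash makes stripping or altering the marker detectable.

```python
import hashlib
import json

def make_provenance_record(content: bytes, generator_id: str) -> dict:
    """Build an illustrative provenance record for synthetically
    generated content. Field names are hypothetical, not taken from
    the IT Rules or any mandated standard."""
    return {
        "label": "synthetically-generated",
        # identifies the generating computer resource (illustrative)
        "generator_id": generator_id,
        # hash binds the record to this exact content
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Tamper check: the record no longer matches if either the
    content or the embedded hash has been altered."""
    return record.get("content_sha256") == hashlib.sha256(content).hexdigest()

media = b"...synthetic audio-visual bytes..."
record = make_provenance_record(media, generator_id="model-xyz/v1")
print(json.dumps(record, indent=2))
print(verify_provenance(media, record))         # untouched content verifies
print(verify_provenance(media + b"x", record))  # any modification fails
```

In practice such markers would be embedded in the media file itself (for example via cryptographically signed manifests), but the core idea is the same: the marker and the content are bound together, so removal or tampering is detectable.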
Fresh duties for social media firms
Significant social media intermediaries face tighter obligations. Users must be required to declare whether their content is AI-generated before upload or publication. Platforms must deploy technical and automated tools to verify these declarations.
Once confirmed as AI-generated, the content must carry a clear and prominent disclosure flagging its synthetic nature.
The takedown clock speeds up
The most dramatic shift lies in timelines. The compliance window for lawful takedown orders has been cut from 36 hours to just 3 hours. Grievance redressal timelines have been halved from 15 days to 7.
For urgent complaints, the response window shrinks from 72 hours to 36. In certain specified cases, intermediaries must now act within 2 hours, down from 24.
Platforms are required to act swiftly once aware of violations involving synthetic media, whether through complaints or their own detection. Measures can include disabling access, suspending accounts and reporting matters to authorities where legally required.
Importantly, the government has clarified that removing or disabling access to synthetic content in line with these rules will not jeopardise safe-harbour protection under Section 79(2) of the IT Act.
The message is unmistakable. As AI blurs the line between real and fabricated, the state is racing to keep pace. For platforms, the era of leisurely compliance is over. In India’s digital marketplace, synthetic content now comes with very real consequences.