Digital
Deloitte India’s GenW.AI puts low-code AI on a fast track
Made-in-India platform aims to help enterprises build apps and agents at speed.
MUMBAI: Innovation just got a shortcut key. Deloitte India is set to launch GenW.AI, an indigenous, next-generation low-code platform designed to help enterprises rapidly prototype and deploy applications and AI agents, without the usual complexity that slows big ideas down. GenW.AI will make its official debut at the India AI Impact Summit in New Delhi next week, marking what Deloitte calls a category-first, fully India-built platform that brings low-code development and agentic AI together under one roof.
Built for speed and flexibility, GenW.AI is offered both on-premise and on cloud, giving enterprises full control over their data and intellectual property. The platform is designed to integrate seamlessly with enterprise technologies and a wide range of large language models, allowing organisations to adapt as sovereign and enterprise-grade AI ecosystems evolve.
At its core, GenW.AI is positioned as a democratiser of innovation. From stitching together data scattered across departments to building workflows, dashboards and explainable AI-driven decision tools, the platform aims to let teams build faster, cheaper and with fewer dependencies on large, bespoke technology programmes.
Deloitte South Asia chief operating officer Nitin Kini said enterprises are increasingly moving away from one-off AI projects towards platforms that allow business and IT teams to co-create safely. He noted that GenW.AI is built for leaders who want speed without sacrificing compliance, data privacy or long-term resilience.
The platform brings together a suite of tools under the GenW.AI umbrella. GenW App Maker enables rapid application development with integrations across databases, APIs and third-party services. GenW Playground focuses on data exploration and dashboard creation without the need for code. GenW RealmAI provides a secure, low-code environment to work with generative AI and Retrieval-Augmented Generation, while GenW Agent Builder allows teams to visually create and manage AI agents, from simple chatbots to complex multi-agent systems.
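For readers unfamiliar with Retrieval-Augmented Generation, the pattern GenW RealmAI is built around, the sketch below shows the idea in its simplest form: retrieve the most relevant enterprise document for a query and ground the LLM prompt in it. The toy keyword scoring and sample policy snippets are illustrative assumptions, not Deloitte's implementation.

```python
# A minimal sketch of Retrieval-Augmented Generation (RAG).
# Documents are scored by simple keyword overlap and the best match
# is prepended to the prompt that would be sent to an LLM.
# Illustrative only; not GenW RealmAI's actual pipeline.

from collections import Counter

DOCUMENTS = [
    "Expense claims above 50,000 INR require CFO approval.",
    "Travel bookings must be made through the internal portal.",
    "Quarterly dashboards are refreshed on the first Monday of the month.",
]

def score(query: str, doc: str) -> int:
    """Count shared lowercase tokens between query and document."""
    q = Counter(query.lower().split())
    d = Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most relevant to the query."""
    return max(docs, key=lambda doc: score(query, doc))

def build_prompt(query: str) -> str:
    """Ground the LLM prompt in retrieved enterprise context."""
    context = retrieve(query, DOCUMENTS)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

if __name__ == "__main__":
    # The resulting prompt would go to whichever open-source or
    # enterprise LLM the platform is configured to use.
    print(build_prompt("Which expense claims require CFO approval?"))
```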
Deloitte India partner and chief disruption officer Jagdish Bhandarkar said the real challenge for enterprises is not whether to adopt low-code or AI, but how to do it with guardrails, scale and speed. He added that GenW.AI is designed to put innovation into the hands of domain experts, enabling them to solve everyday problems while maintaining enterprise-grade oversight.
GenW.AI is built to integrate with enterprise applications and ERPs through pre-built connectors, supports both open-source and enterprise LLMs, and includes security features such as role-based access, encrypted agent-to-agent communication and auditability. Deloitte India has already deployed the platform internally, refining it through real-world use before opening it up to clients.
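As an illustration of what role-based access means in practice, here is a minimal sketch: each role maps to a set of permitted actions, and any request outside that set is refused. The roles and permissions shown are assumptions made for the example, not GenW.AI's actual configuration.

```python
# An illustrative sketch of role-based access control (RBAC).
# Roles and permissions below are assumptions for the example only.

ROLE_PERMISSIONS = {
    "viewer": {"read_dashboard"},
    "builder": {"read_dashboard", "edit_app", "run_agent"},
    "admin": {"read_dashboard", "edit_app", "run_agent", "manage_users"},
}

def is_allowed(role: str, action: str) -> bool:
    """Allow an action only if the user's role grants it."""
    return action in ROLE_PERMISSIONS.get(role, set())

if __name__ == "__main__":
    print(is_allowed("viewer", "edit_app"))    # False
    print(is_allowed("builder", "run_agent"))  # True
```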
As enterprises race to turn AI ambition into usable outcomes, Deloitte India’s GenW.AI is positioning itself as a home-grown platform built not just to experiment, but to scale ideas that are ready to work in the real world.
Digital
Govt tightens the screws on AI content with sharper IT rules
New norms bring labelling mandates and faster compliance timelines for platforms
NEW DELHI: Govt has moved sharply to police the fast-expanding world of AI content, amending its IT rules to formally regulate synthetically generated media and slash takedown timelines to as little as two hours.
The Union ministry of electronics and information technology (MeitY) notified the changes on February 10, with the new regime set to kick in from February 20, 2026. The amendments pull AI-generated content squarely into India’s intermediary rulebook, widening due-diligence, takedown and enforcement obligations for digital platforms.
At the heart of the change is a legal clarification: “information” used for unlawful acts now explicitly includes synthetically generated material. In effect, AI-made content will be treated on par with any other potentially unlawful information under the IT Rules.
Platforms must also step up user warnings. Intermediaries are now required to remind users at least once every three months that violating platform rules or user agreements can trigger immediate suspension, termination, content removal or all three. Users must also be warned that unlawful activity could invite penalties under applicable laws.
Intermediaries must also report to the authorities any offences that attract mandatory reporting, including those under the Bharatiya Nagarik Suraksha Sanhita, 2023 and the Protection of Children from Sexual Offences Act.
AI-generated content defined
The amendments introduce the term “synthetically generated information”, covering audio-visual material that is artificially or algorithmically created, modified or altered using computer resources in a way that appears real and could be perceived as indistinguishable from an actual person or real-world event.
However, routine and good-faith uses are carved out. Editing, formatting, transcription, translation, accessibility features, educational or training materials and research outputs are excluded so long as they do not create false or misleading electronic records.
Mandatory labelling and metadata
Intermediaries enabling AI content creation or sharing must ensure clear and prominent labelling of such material as synthetically generated. Where technically feasible, the content must carry embedded, persistent metadata or provenance markers, including unique identifiers linking it to the generating computer resource.
Platforms are barred from allowing the removal or tampering of these labels or metadata, a move aimed at preserving traceability.
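To make the idea concrete, the sketch below builds the kind of provenance record the rules gesture at: a unique identifier, a reference to the generating system, and a content hash that exposes tampering. The field names are assumptions for illustration; the notified rules do not prescribe this format.

```python
# A minimal sketch of a provenance record for synthetic content:
# a unique identifier tied to the generating computer resource,
# plus a content hash that reveals tampering. Field names are
# illustrative assumptions, not the format prescribed by the rules.

import hashlib
import json
import uuid
from datetime import datetime, timezone

def make_provenance(content: bytes, generator_id: str) -> dict:
    """Build a provenance record with a unique ID and a content hash."""
    return {
        "record_id": str(uuid.uuid4()),           # unique identifier
        "generator": generator_id,                # generating computer resource
        "created_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(content).hexdigest(),
        "label": "synthetically generated",       # the mandated disclosure
    }

def verify(content: bytes, record: dict) -> bool:
    """Detect tampering: the stored hash must match the content."""
    return record["sha256"] == hashlib.sha256(content).hexdigest()

if __name__ == "__main__":
    media = b"<ai-generated video bytes>"
    record = make_provenance(media, generator_id="genai-service-01")
    print(json.dumps(record, indent=2))
    print("intact:", verify(media, record))
```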
Fresh duties for social media firms
Significant social media intermediaries face tighter obligations. Users must be required to declare whether their content is AI-generated before upload or publication. Platforms must deploy technical and automated tools to verify these declarations.
Once confirmed as AI-generated, the content must carry a clear and prominent disclosure flagging its synthetic nature.
The takedown clock speeds up
The most dramatic shift lies in timelines. The compliance window for lawful takedown orders has been cut from 36 hours to just 3 hours. Grievance redressal timelines have been halved from 15 days to 7.
For urgent complaints, the response window shrinks from 72 hours to 36. In certain specified cases, intermediaries must now act within 2 hours, down from 24.
Platforms are required to act swiftly once aware of violations involving synthetic media, whether through complaints or their own detection. Measures can include disabling access, suspending accounts and reporting matters to authorities where legally required.
Importantly, the government has clarified that removing or disabling access to synthetic content in line with these rules will not jeopardise safe-harbour protection under Section 79(2) of the IT Act.
The message is unmistakable. As AI blurs the line between real and fabricated, the state is racing to keep pace. For platforms, the era of leisurely compliance is over. In India’s digital marketplace, synthetic content now comes with very real consequences.