Digital
OpenAI partners with IIT, IIM, AIIMS in AI education drive
From IIT to AIIMS, six institutions bring AI into everyday learning
NEW DELHI: OpenAI has teamed up with six of India’s leading higher education institutions to weave artificial intelligence into the fabric of campus life, aiming to build a generation of graduates ready for an AI-first economy.
The first cohort spans management, medicine, engineering, creative disciplines and multidisciplinary education. It includes the Indian Institute of Technology Delhi, Indian Institute of Management Ahmedabad, All India Institute of Medical Sciences New Delhi, Manipal Academy of Higher Education, University of Petroleum and Energy Studies and Pearl Academy.
The initiative is designed to go beyond basic access to AI tools. Instead, it focuses on helping students, faculty and staff use AI to deepen learning, sharpen critical thinking and accelerate research, all within responsible-use and academic-integrity frameworks.
OpenAI's head of education for India, Raghav Gupta, said the shift is essential as workplaces evolve. He noted that nearly 40 per cent of core skills are expected to change by 2030, largely driven by AI. “Education institutions are a critical route to bridge the gap between what AI tools can do and how people are actually using them,” he said.
Over the coming year, the collaboration is expected to support more than 100,000 students, faculty members and staff. The programme will introduce campus-wide ChatGPT Edu access, structured onboarding, discipline-specific guidance and responsible-use policies tailored to each institution.
AI skills will be embedded into everyday academic workflows, from advanced prompting and coding to analytics, simulations, case studies and research support. The initiative will also bring hackathons, build days and research-to-deployment projects, culminating in industry days that connect campus innovations with startups and enterprises.
IIM Ahmedabad and Manipal Academy of Higher Education will also roll out OpenAI certifications, creating structured AI learning pathways in business and multidisciplinary programmes.
Beyond university campuses, OpenAI is collaborating with edtech platforms Physics Wallah, upGrad and HCL GUVI to launch structured courses on AI fundamentals and practical ChatGPT use cases. The aim is to extend AI fluency to students and early-career professionals across the country.
Across the six institutions, the focus areas vary. IIT Delhi will concentrate on engineering research, prototyping and industry-linked innovation. IIM Ahmedabad will embed AI across management disciplines, from strategy and finance to entrepreneurship and public policy.
At AIIMS New Delhi, the collaboration will explore AI-driven medical education, including simulation, clinical documentation and evidence synthesis, while setting safety and quality benchmarks. Manipal Academy will focus on cross-disciplinary research and large-scale AI literacy across programmes.
UPES plans to integrate AI across engineering, business, law, design and health sciences, positioning it as a core academic and operational tool. Pearl Academy will apply AI to creative workflows, from fashion and branding to digital media, giving students practical exposure to AI-driven design.
Taken together, the initiative signals a broader shift in Indian higher education from simply offering AI access to building institutions that think, teach and create with it at their core.
Govt eyes curbs on misleading AI ads targeting children & women: Ashwini Vaishnaw
Ashwini Vaishnaw says new safeguards under discussion to boost online safety
NEW DELHI: The government is examining fresh measures to curb misleading advertisements and harmful content targeting children and women on digital platforms, Union Minister for Electronics and IT Ashwini Vaishnaw told the Lok Sabha on Wednesday.
Responding to a question from a member, Vaishnaw said ensuring the safety of children and women across social media platforms has become an urgent priority as the digital ecosystem expands rapidly.
“The safety of children on all social media platforms and the safety of women against misleading advertisements is a very important point. We have to take all steps required to ensure the safety of our children and the entire society on digital platforms, whether it is AI-generated material or content posted by publishers on social media platforms,” the minister said.
He added that discussions on stronger safeguards are underway and noted that there is “practically unanimity” among members of the consultative committee on the need for additional measures to protect citizens online. Vaishnaw also acknowledged the work of the Parliamentary Standing Committee on Communications and IT, chaired by BJP MP Nishikant Dubey, which recently examined the issue of online safety in detail.
Separately, in a written reply in Parliament, minister of state for electronics and IT Jitin Prasada said the government’s approach is aimed at building an “open, safe, trusted and accountable internet” for all users, particularly children.
He noted that existing legislation, such as the Information Technology Act, 2000 and the Digital Personal Data Protection Act, 2023, already places obligations on social media platforms to prevent the hosting or sharing of unlawful or harmful content. Platforms must also remove such content within hours of being notified by authorities.
Under the DPDP framework, additional safeguards are in place for children’s data. These include mandatory verifiable parental consent before platforms process the personal data of minors, along with restrictions on tracking, behavioural monitoring or targeted advertising directed at children.
In another response in Parliament, the government also flagged rising concerns around technology-enabled crimes against women, including cyberbullying, harassment and the misuse of deepfake technology.
To address these risks, amendments to the Information Technology Rules 2021 notified in February 2026 require social media platforms to deploy technical measures to prevent the creation and spread of unlawful AI-generated content. Platforms must also clearly label synthetic media that is permitted on their services.
As AI-generated content becomes easier to produce and distribute, policymakers are now weighing additional steps to ensure the digital world remains not just innovative, but safe for its most vulnerable users.