Digital
Ethical AI must benefit society, not dominate it, says WFEB chief Sanjay Pradhan at IAA event
At Mumbai event, ethics expert urges businesses and governments to shape AI responsibly
MUMBAI: Artificial intelligence may be racing ahead at lightning speed, but its direction must still be guided by human conscience. That was the central message delivered by Sanjay Pradhan, president of the World Forum for Ethics in Business (WFEB), during the latest edition of IAA Conversations held in Mumbai.
The session was organised by the International Advertising Association (IAA) and the Artificial Intelligence Association of India (AIAI) in association with The Free Press Journal at the Free Press House on 7 March. Addressing a packed audience, Pradhan called for stronger ethical leadership to ensure AI remains a tool that benefits humanity rather than one that governs it.
“Artificial intelligence has rapidly become one of the most powerful technologies humanity has created,” Pradhan said. “It is unlocking breakthroughs in medicine, science and creativity at a pace unimaginable just a few years ago.”
But he warned that the same technology carries serious risks. AI, he noted, can amplify disinformation faster than facts can travel, compromise privacy, deepen discrimination and disrupt millions of livelihoods. Referencing concerns raised by AI pioneers such as Geoffrey Hinton, often called the godfather of AI, Pradhan stressed that the real challenge is not whether AI will shape the world, but whether humans will shape it with ethics and wisdom.
Structuring his talk around four guiding questions: why, what, how and who, Pradhan introduced the audience to WFEB's emerging AI Ethics Partnership, a global platform aimed at advancing responsible artificial intelligence. He outlined four priority concerns that demand urgent attention: disinformation, bias and discrimination, data privacy and job security.
To make the idea of ethical AI easier to grasp, Pradhan offered a simple metaphor. Ethical AI, he said, is like a three-layered cake. The outer layer represents the visible value ethical AI creates for businesses and society. The middle layer is the organisational culture that moves ethics from written codes to everyday practice. The innermost layer, however, is the most crucial: the conscience of individual leaders.
Drawing from Indian philosophical thought through WFEB co-founder Ravi Shankar, Pradhan noted that while artificial intelligence can reproduce stored knowledge, true intelligence is boundless and rooted in conscience, creativity and compassion. Practices such as breathwork and meditation, he suggested, can help leaders develop the calm clarity needed for ethical decision making.
The event also featured a discussion with Maninder Adityaraj Singh, chief of staff and head of innovation at Rediffusion Brand Solutions Pvt Ltd, and Yash Johri, a lawyer at the Supreme Court of India.
Opening the session, IAA India chapter president Abhishek Karnani highlighted the need for industries to understand and engage with AI responsibly.
“AI has to be befriended and understood,” added Rediffusion managing director and AIAI national convenor Sandeep Goyal. “Its ethical use will determine whether it becomes a friend or a foe.”
As AI continues to reshape industries and societies, Pradhan ended with a simple but powerful call to action. Businesses, governments and individuals must work together to ensure that the algorithms shaping the future reflect human values rather than just cold logic.
Govt eyes curbs on misleading AI ads targeting children & women: Ashwini Vaishnaw
Ashwini Vaishnaw says new safeguards under discussion to boost online safety
NEW DELHI: The government is examining fresh measures to curb misleading advertisements and harmful content targeting children and women on digital platforms, union minister for electronics and IT Ashwini Vaishnaw told the Lok Sabha on Wednesday.
Responding to a question from a member, Vaishnaw said ensuring the safety of children and women across social media platforms has become an urgent priority as the digital ecosystem expands rapidly.
“The safety of children on all social media platforms and the safety of women against misleading advertisements is a very important point. We have to take all steps required to ensure the safety of our children and the entire society on digital platforms, whether it is AI-generated material or content posted by publishers on social media platforms,” the minister said.
He added that discussions on stronger safeguards are underway and noted that there is “practically unanimity” among members of the consultative committee on the need for additional measures to protect citizens online. Vaishnaw also acknowledged the work of the Parliamentary Standing Committee on Communications and IT, chaired by BJP MP Nishikant Dubey, which recently examined the issue of online safety in detail.
Separately, in a written reply in Parliament, minister of state for electronics and IT Jitin Prasada said the government’s approach is aimed at building an “open, safe, trusted and accountable internet” for all users, particularly children.
He noted that existing legislation such as the Information Technology Act 2000 and the Digital Personal Data Protection Act 2023 already places obligations on social media platforms to prevent the hosting or sharing of unlawful or harmful content. Platforms must also remove such content within hours once notified by authorities.
Under the DPDP framework, additional safeguards are in place for children’s data. These include mandatory verifiable parental consent before platforms process the personal data of minors, along with restrictions on tracking, behavioural monitoring or targeted advertising directed at children.
In another response in Parliament, the government also flagged rising concerns around technology-enabled crimes against women, including cyberbullying, harassment and the misuse of deepfake technology.
To address these risks, amendments to the Information Technology Rules 2021 notified in February 2026 require social media platforms to deploy technical measures to prevent the creation and spread of unlawful AI-generated content. Platforms must also clearly label synthetic media that is permitted on their services.
As AI-generated content becomes easier to produce and distribute, policymakers are now weighing additional steps to ensure the digital world remains not just innovative, but safe for its most vulnerable users.