Digital
Trump bans Anthropic’s Claude from US federal use after AI policy clash
A software standoff leaves the President seeing red over Anthropic’s lines
WASHINGTON: President Trump has issued an executive order mandating that all federal agencies immediately stop using technology from Anthropic, the developer of the Claude AI models. The directive marks a total break between the administration and one of America’s leading artificial intelligence firms.
The decision follows a breakdown in contract negotiations between Anthropic and the Department of Defense. The central conflict involved Anthropic’s refusal to remove specific safety restrictions that would prevent its AI from being used for domestic mass surveillance or the operation of autonomous weapons systems.
While Anthropic maintained that these boundaries were essential for safety, the administration characterised them as a refusal to support national security requirements.
The order goes beyond a simple cancellation of services and introduces multiple layers of restrictions. Anthropic has been formally designated a “National Security Risk.” This classification effectively prevents any federal contractor or government partner from using the company’s software.
A six-month transition period has been established to allow agencies to migrate critical systems away from Claude and shift to alternative providers. During this time, departments are expected to review existing deployments and implement replacement solutions.
In addition, the General Services Administration has begun removing Anthropic from all approved federal vendor and procurement lists. This step ensures that no new federal contracts can be awarded to the company under current guidelines.
The vacuum created by the ban is already being filled by competitors. Shortly after the announcement, OpenAI reached a new agreement with the Pentagon to provide AI services. The administration has also indicated that it will expand its reliance on Elon Musk’s Grok AI platform for various government functions.
Anthropic has stated that it intends to challenge the order in court, arguing that the designation is legally unfounded for a domestic company.
It is important to note that the ban applies only to the United States federal government and its direct contractors. For individual users and private businesses in the United Kingdom and elsewhere, Anthropic’s services remain fully available and unaffected by the executive order.
Ethical AI must benefit society, not dominate it, says WFEB chief Sanjay Pradhan at IAA event
At Mumbai event, ethics expert urges businesses and governments to shape AI responsibly
MUMBAI: Artificial intelligence may be racing ahead at lightning speed, but its direction must still be guided by human conscience. That was the central message delivered by Sanjay Pradhan, president of the World Forum for Ethics in Business (WFEB), during the latest edition of IAA Conversations held in Mumbai.
The session was organised by the International Advertising Association (IAA) and the Artificial Intelligence Association of India (AIAI) in association with The Free Press Journal at the Free Press House on 7 March. Addressing a packed audience, Pradhan called for stronger ethical leadership to ensure AI remains a tool that benefits humanity rather than one that governs it.
“Artificial intelligence has rapidly become one of the most powerful technologies humanity has created,” Pradhan said. “It is unlocking breakthroughs in medicine, science and creativity at a pace unimaginable just a few years ago.”
But he warned that the same technology carries serious risks. AI, he noted, can amplify disinformation faster than facts can travel, compromise privacy, deepen discrimination and disrupt millions of livelihoods. Referencing concerns raised by AI pioneers such as Geoffrey Hinton, often called the godfather of AI, Pradhan stressed that the real challenge is not whether AI will shape the world, but whether humans will shape it with ethics and wisdom.
Structuring his talk around four guiding questions (why, what, how and who), Pradhan introduced the audience to WFEB’s emerging AI Ethics Partnership, a global platform aimed at advancing responsible artificial intelligence. He outlined four priority concerns that demand urgent attention: disinformation, bias and discrimination, data privacy, and job security.
To make the idea of ethical AI easier to grasp, Pradhan offered a simple metaphor. Ethical AI, he said, is like a three-layered cake. The outer layer represents the visible value ethical AI creates for businesses and society. The middle layer is the organisational culture that moves ethics from written codes to everyday practice. The innermost layer, however, is the most crucial: the conscience of individual leaders.
Drawing from Indian philosophical thought through WFEB co-founder Ravi Shankar, Pradhan noted that while artificial intelligence can reproduce stored knowledge, true intelligence is boundless and rooted in conscience, creativity and compassion. Practices such as breathwork and meditation, he suggested, can help leaders develop the calm clarity needed for ethical decision making.
The event also featured a discussion with Maninder Adityaraj Singh, chief of staff and head of innovation at Rediffusion Brand Solutions Pvt Ltd, and Yash Johri, a lawyer at the Supreme Court of India.
Opening the session, IAA India chapter president Abhishek Karnani highlighted the need for industries to understand and engage with AI responsibly.
“AI has to be befriended and understood,” added Rediffusion managing director and AIAI national convenor Sandeep Goyal. “Its ethical use will determine whether it becomes a friend or a foe.”
As AI continues to reshape industries and societies, Pradhan ended with a simple but powerful call to action. Businesses, governments and individuals must work together to ensure that the algorithms shaping the future reflect human values rather than just cold logic.