Digital
India ranks second globally for ransomware detections in 2025
Acronis report warns of surging AI-powered attacks, phishing dominance, and high lateral movement in Indian networks.
MUMBAI: India’s cybersecurity defences are getting a serious stress test: hackers aren’t just knocking on the door anymore; they’re moving in, redecorating, and throwing a ransomware party before anyone notices. Acronis, the global cybersecurity and data protection firm, released its biannual Cyberthreats Report for H2 2025 (titled “From exploits to malicious AI”) on 18 February 2026, drawing on telemetry from more than one million endpoints via its Threat Research Unit and sensor network.
The standout alarm for India: it claimed second place worldwide for ransomware detections, trailing only the US, which logged a hefty 31 per cent of all global detections. It also cracked the top 10 for publicly identified ransomware victims, with 129 organisations going public. More worryingly, India topped the charts for lateral movement and mass infection activity, including the planet’s largest internal propagation incidents. Attackers aren’t content with breaching the perimeter; they’re spreading like wildfire inside networks, amplifying disruption and business pain.
Globally, cyberattacks kept climbing in 2025. Email-based threats rose 16 per cent per organisation and 20 per cent per user year-on-year, while phishing stayed king, driving 83 per cent of email threats in the second half and serving as the entry point for 52 per cent of attacks on managed service providers (MSPs). Attacks on collaboration platforms exploded from 12 per cent in 2024 to 31 per cent in 2025, turning tools like Teams and Slack into prime secondary vectors.
Other red flags from the report:
PowerShell ruled as the most-abused legitimate tool, especially in Germany, the US, and Brazil.
All MSP-platform CVEs disclosed in 2025 earned High or Critical ratings.
AI turned operational for crooks: used for reconnaissance, ransomware negotiations (e.g., Global Group automating chats across victims), data exfiltration (GTG-2002 style), and even chilling social engineering like AI-generated “proof of life” images in virtual kidnapping scams.
Hotspots included India, the US, and the Netherlands for mass infections and lateral hops; South Korea led malware hits, with 12 per cent of users affected.
Ransomware favourites targeted manufacturing, technology, and healthcare, sectors where uptime demands make disruption especially crippling. Top groups: Qilin (962 victims), Akira (726), Cl0p (517). Nearly 150 MSPs and telcos were hit directly; over 7,600 public victims were recorded worldwide, with the US suffering 3,243. Newcomers Sinobi, TheGentlemen, and CoinbaseCartel joined the fray in H2.
Supply-chain woes persisted too: RMM tools like AnyDesk and TeamViewer were exploited, affecting over 1,200 third parties globally, with the US taking 574 hits. Akira and Cl0p led here again.
Acronis CISO Gerald Beuchelt summed it up bluntly: “As cyber threats evolve at an accelerated pace, 2025 has shown that attackers are not only scaling traditional methods like phishing and ransomware, but are leveraging AI to act faster, more efficiently, and at greater scale. This shift requires organisations to anticipate threats, automate defences, and build resilient systems capable of withstanding both traditional and AI-driven attacks.”
For Indian businesses, the message is clear: the threat landscape isn’t just heating up; it’s gone full inferno, with AI fanning the flames. Time to upgrade those digital fire extinguishers before the next breach burns brighter.
Trump bans Anthropic’s Claude from US federal use after AI policy clash
A software standoff leaves the President seeing red over Anthropic’s lines
WASHINGTON: President Trump has issued an executive order mandating that all federal agencies immediately stop using technology from Anthropic, the developer of the Claude AI models. The directive marks a total break between the administration and one of America’s leading artificial intelligence firms.
The decision follows a breakdown in contract negotiations between Anthropic and the Department of Defense. The central conflict involved Anthropic’s refusal to remove specific safety restrictions that would prevent its AI from being used for domestic mass surveillance or the operation of autonomous weapons systems.
While Anthropic maintained that these boundaries were essential for safety, the administration characterised them as a refusal to support national security requirements.
The order goes beyond a simple cancellation of services and layers on multiple restrictions. Anthropic has been formally designated a “National Security Risk,” a classification that effectively bars any federal contractor or government partner from using the company’s software.
A six-month transition period has been established to allow agencies to migrate critical systems away from Claude and shift to alternative providers. During this time, departments are expected to review existing deployments and implement replacement solutions.
In addition, the General Services Administration has begun removing Anthropic from all approved federal vendor and procurement lists. This step ensures that no new federal contracts can be awarded to the company under current guidelines.
The vacuum created by the ban is already being filled by competitors. Shortly after the announcement, OpenAI reached a new agreement with the Pentagon to provide AI services. The administration has also indicated that it will expand its reliance on Elon Musk’s Grok AI platform for various government functions.
Anthropic has stated that it intends to challenge the order in court, arguing that the designation is legally unfounded for a domestic company.
It is important to note that the ban applies only to the United States federal government and its direct contractors. For individual users and private businesses in the United Kingdom and elsewhere, Anthropic’s services remain fully available and unaffected by the executive order.