
Microsoft appoints Charlie Bell as AI-driven code raises quality concerns

Former security chief to oversee reliability as AI writes up to 30 per cent of code

WASHINGTON: Microsoft has created a new senior role focused on engineering quality and appointed Charlie Bell, its former head of security, to the position, as concerns mount over software reliability in the age of AI-written code.

Bell will serve as engineering quality head and report directly to chief executive Satya Nadella, according to an internal memo published on the company’s blog on February 4. The appointment comes as artificial intelligence takes on a growing share of Microsoft’s software development workload.

Last year, Nadella said AI systems were generating between 20 per cent and 30 per cent of the company’s code. Chief technology officer Kevin Scott has since suggested that AI could be responsible for the majority of code generation by the end of the decade.

The move reflects broader unease across the industry about the quality of AI-generated software. Research has linked AI coding tools to higher levels of code churn, while Microsoft’s own studies have found developers are more likely to overlook bugs when reviewing machine-written code than human-authored work.

The focus on engineering quality also follows a series of reliability issues across Microsoft products. Windows 11 has suffered from several problematic updates in recent months, including security patches that disrupted system booting and shutdown. In response, Microsoft has redeployed engineers from new feature development to stabilisation efforts under an internal initiative known as “swarming”.

Bell joined Microsoft in 2021 after more than 20 years at Amazon and previously led the company’s security organisation. In his new role, he will operate as an individual contributor rather than managing large teams. Nadella said the shift was planned and reflected Bell’s desire to return to hands-on engineering work.

Succeeding Bell, Hayete Gallot has been appointed executive vice president for security. Gallot returns to Microsoft after a stint at Google Cloud and brings more than 15 years of prior experience at the company.

The appointment comes amid mixed results from Microsoft’s wider AI strategy. Adoption of Copilot across Microsoft 365 has remained modest, while the company has faced investor pressure following slower cloud growth and recent share price performance. Microsoft has also scaled back some Copilot integrations in consumer products.

Govt tightens the screws on AI content with sharper IT rules

New norms bring labelling mandates and faster compliance timelines for platforms

NEW DELHI: The government has moved sharply to police the fast-expanding world of AI content, amending its IT rules to formally regulate synthetically generated media and slash takedown timelines to as little as two hours.

The Union ministry of electronics and information technology (MeitY) notified the changes on February 10, with the new regime set to kick in from February 20, 2026. The amendments pull AI-generated content squarely into India’s intermediary rulebook, widening due-diligence, takedown and enforcement obligations for digital platforms.

At the heart of the change is a legal clarification: “information” used for unlawful acts now explicitly includes synthetically generated material. In effect, AI-made content will be treated on par with any other potentially unlawful information under the IT Rules.

Platforms must also step up user warnings. Intermediaries are now required to remind users at least once every three months that violating platform rules or user agreements can trigger immediate suspension, termination, content removal or all three. Users must also be warned that unlawful activity could invite penalties under applicable laws.

Certain offences, including those under the Bharatiya Nagarik Suraksha Sanhita, 2023, and the Protection of Children from Sexual Offences Act, must now be mandatorily reported to the authorities.

AI-generated content defined
The amendments introduce the term “synthetically generated information”, covering audio-visual material that is artificially or algorithmically created, modified or altered using computer resources in a way that appears real and could be perceived as indistinguishable from an actual person or real-world event.

However, routine and good-faith uses are carved out. Editing, formatting, transcription, translation, accessibility features, educational or training materials and research outputs are excluded so long as they do not create false or misleading electronic records.

Mandatory labelling and metadata
Intermediaries enabling AI content creation or sharing must ensure clear and prominent labelling of such material as synthetically generated. Where technically feasible, the content must carry embedded, persistent metadata or provenance markers, including unique identifiers linking it to the generating computer resource.

Platforms are barred from allowing these labels or metadata to be removed or tampered with, a move aimed at preserving traceability.
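The rules do not prescribe a technical format for these provenance markers. As a purely illustrative sketch (not anything mandated or defined by MeitY), a platform could attach a sidecar record that ties AI-generated content to its generating resource via a content hash, so that any later tampering is detectable; all field names below are hypothetical:

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_record(content: bytes, generator_id: str) -> dict:
    """Build an illustrative provenance marker linking content to the
    generating computer resource. Field names are hypothetical, not
    taken from the amended IT Rules."""
    return {
        "synthetically_generated": True,
        "generator_id": generator_id,  # identifier of the generating resource
        "created_at": datetime.now(timezone.utc).isoformat(),
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Detect tampering: the stored hash must still match the content."""
    return record.get("content_sha256") == hashlib.sha256(content).hexdigest()

media = b"<ai-generated media bytes>"
record = make_provenance_record(media, "model-x/v1")
print(json.dumps(record, indent=2))
```

A real deployment would embed such markers persistently inside the media file itself (for example via a content-credentials standard) rather than as a detachable record, since the rules require that labels survive redistribution.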

Fresh duties for social media firms
Significant social media intermediaries face tighter obligations. Users must be required to declare whether their content is AI-generated before upload or publication. Platforms must deploy technical and automated tools to verify these declarations.

Once confirmed as AI-generated, the content must carry a clear and prominent disclosure flagging its synthetic nature.
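The rules leave the choice of verification tooling to the platform. One minimal sketch of the declare-verify-disclose flow described above, with an entirely hypothetical stand-in for the automated detector, might look like this:

```python
from dataclasses import dataclass, field

@dataclass
class Upload:
    content: str
    user_declared_ai: bool
    labels: list = field(default_factory=list)

def looks_synthetic(content: str) -> bool:
    """Stand-in for a platform's automated AI-content detector.
    Real detectors (classifiers, watermark checks) are far more complex."""
    return "[ai]" in content.lower()

def process_upload(upload: Upload) -> Upload:
    """Honour the user's declaration, but also verify it with the
    automated check, as the amended rules require of large platforms."""
    if upload.user_declared_ai or looks_synthetic(upload.content):
        upload.labels.append("Synthetically generated")
    return upload

# A truthful declaration and an undeclared-but-detected upload
# both end up carrying the prominent disclosure.
declared = process_upload(Upload("sunset clip", user_declared_ai=True))
detected = process_upload(Upload("[AI] sunset clip", user_declared_ai=False))
```

The key design point the rules imply is that the label is applied whenever either signal fires: a user declaration alone suffices, but an undeclared upload that the platform's own tooling flags must be disclosed as well.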

The takedown clock speeds up
The most dramatic shift lies in timelines. The compliance window for lawful takedown orders has been cut from 36 hours to just three hours, and grievance redressal timelines from 15 days to seven.

For urgent complaints, the response window shrinks from 72 hours to 36. In certain specified cases, intermediaries must now act within 2 hours, down from 24.

Platforms are required to act swiftly once aware of violations involving synthetic media, whether through complaints or their own detection. Measures can include disabling access, suspending accounts and reporting matters to authorities where legally required.

Importantly, the government has clarified that removing or disabling access to synthetic content in line with these rules will not jeopardise safe-harbour protection under Section 79(2) of the IT Act.

The message is unmistakable. As AI blurs the line between real and fabricated, the state is racing to keep pace. For platforms, the era of leisurely compliance is over. In India’s digital marketplace, synthetic content now comes with very real consequences.

Copyright © 2026 Indian Television Dot Com PVT LTD