George Noble flags trouble at OpenAI in viral X post

MUMBAI: A single viral post has reignited a familiar Silicon Valley anxiety. Is the artificial intelligence boom racing ahead of its own economics?

Over the weekend, investor and commentator George Noble published a lengthy note on X that bluntly questioned OpenAI’s finances, leadership stability and long-term prospects. The post quickly gained traction, not because it revealed new filings or announcements, but because it stitched together rumours, analyst estimates and user frustration into a sharply worded warning about one of the world’s most valuable AI firms.

The stakes go well beyond OpenAI. The company sits at the heart of the current AI investment cycle, propped up by Big Tech capital, vast data centres and soaring expectations. If even a fraction of Noble’s claims prove accurate, it would strengthen the argument that AI’s costs are rising faster than its returns, just as competitors close the gap and regulators circle.

Noble argues that OpenAI is struggling to balance three pressures at once: intensifying competition from Google’s Gemini, eye-watering compute and energy bills, and growing legal and governance scrutiny. He also suggests that recent product updates have failed to impress users in the way earlier releases did, raising uncomfortable questions about diminishing returns.

OpenAI has not publicly responded to the post. Many of the figures cited are based on analyst estimates or second-hand accounts, and supporters say the company is investing aggressively to secure long-term dominance rather than chasing short-term profit. Even so, the thread has struck a nerve at a moment when investors are already reassessing lofty AI valuations.

Below is George Noble’s post on X, reproduced in full, which has fuelled the debate.

OpenAI IS FALLING APART IN REAL TIME

I’ve watched companies implode for decades. 
This one has all the warning signs.

OpenAI declared “Code Red” in December. 
Altman sent an internal memo telling employees to drop everything because Google’s Gemini 3 is eating their lunch.

Salesforce CEO Marc Benioff publicly ditched ChatGPT for Gemini after using it for two hours.

ChatGPT traffic fell in November. 
Second month-over-month decline of 2025. 
Meanwhile Gemini jumped to 650 million monthly active users.

The company that was supposed to build AGI can’t keep its chatbot competitive.

But the real story is the money.

OpenAI lost $12 BILLION in a single quarter according to Microsoft’s own fiscal disclosures.

Deutsche Bank estimates $143 billion in cumulative negative cash flow before the company turns profitable. 
Their analysts put it bluntly: 
“No startup in history has operated with losses on anything approaching this scale.”

They’re burning $15 million per day on Sora alone. 
$5 billion annually to generate copyright-infringing memes. 
Even Sora’s lead engineer admitted the “economics are currently completely unsustainable.”

Here’s the big math problem nobody wants to discuss.

It’s going to cost 5x the energy and money to make these models 2x better. 
The low-hanging fruit is gone.

Every incremental improvement now requires exponentially more compute, more data centers, more power.
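The shape of that claim can be sanity-checked with a simple power-law model. The sketch below is purely illustrative: it assumes a Kaplan-style scaling law in which loss falls as compute^(-alpha), and the exponent alpha = 0.05 is an assumed value chosen for illustration, not a figure from Noble’s post or from OpenAI.

```python
# Minimal sketch of diminishing returns under an assumed power-law
# scaling: loss(C) = a * C**(-alpha). The exponent is hypothetical.

ALPHA = 0.05  # assumed scaling exponent, for illustration only

def compute_multiplier(loss_improvement: float, alpha: float = ALPHA) -> float:
    """How much compute must grow to cut loss by `loss_improvement`x.

    From loss proportional to C**(-alpha), reducing loss by a factor f
    requires compute to grow by f**(1/alpha).
    """
    return loss_improvement ** (1.0 / alpha)

if __name__ == "__main__":
    for improvement in (1.1, 1.5, 2.0):
        print(f"{improvement:.1f}x lower loss -> "
              f"~{compute_multiplier(improvement):,.0f}x more compute")
```

Under that assumed exponent, a 2x quality gain would demand roughly a million times the compute, which is the sense in which each increment gets exponentially more expensive.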

Reports suggest OpenAI’s large training runs in 2025 failed to produce models better than prior versions.

GPT-5 launched to widespread disappointment. 
Users called it “underwhelming” and “horrible.”

OpenAI had to restore GPT-4o within 24 hours because users preferred the old model.

Altman had promised GPT-5 would make GPT-4 feel “mildly embarrassing.” 
Instead, users complained it was worse at basic math and geography.

They’ve released GPT-5.1, GPT-5.2 since. 
Same complaints each time: too corporate, too safe, robotic, boring.

The talent exodus makes this even worse.

CTO Mira Murati. Gone. 
Chief Research Officer Bob McGrew. Gone. 
Chief Scientist Ilya Sutskever. Gone. 
President Greg Brockman. Gone.

Half the AI safety team departed. 
Multiple executives reportedly cited “psychological abuse” under Altman’s leadership.

And now Elon Musk is suing for up to $134 billion.

A federal judge just ruled the case goes to jury trial in April. 
There’s “plenty of evidence” that OpenAI’s leaders promised to maintain the nonprofit structure that Musk funded.

Musk provided $38 million in early funding based on those assurances. 
Now he wants his share of the $500 billion valuation.

OpenAI called it “harassment.” 
But the judge disagreed.

Here’s what I think happens next.

The AI hype cycle is peaking. 
The diminishing returns are becoming impossible to hide. 
Competitors are catching up. 
The lawsuits are piling up.

OpenAI needs to generate $200 billion in annual revenue by 2030 to justify their projections. 
That’s 15x growth in five years while costs keep exploding.
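The compounding behind that figure is easy to check: 15x over five years implies a growth rate of roughly 72 per cent a year, every year. A minimal sketch of the arithmetic:

```python
# Sanity-check of the growth claim: what constant annual growth rate
# turns 1x into 15x over five years? 15 ** (1/5) - 1 is about 0.719.

def implied_cagr(multiple: float, years: int) -> float:
    """Compound annual growth rate implied by `multiple`x growth over `years` years."""
    return multiple ** (1.0 / years) - 1.0

if __name__ == "__main__":
    print(f"15x in 5 years -> {implied_cagr(15.0, 5):.1%} per year")
```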

Even Sam Altman admitted investors are “overexcited” about AI. 
His exact words: “Someone is going to lose a phenomenal amount of money.”

If I were running an AI startup with good traction right now, I’d be looking for an exit. 
Sell into the hype before the music stops.

My positioning:

I’m not touching OpenAI-adjacent plays at these valuations. 
The risk profile is astronomical.

If you’re exposed to the Magnificent 7 through AI infrastructure bets, consider trimming.

The gap between promised revolution and delivered reality has never been wider.

The smart money is rotating into sectors where valuations actually reflect fundamentals.

Small and mid-caps are trading near decade lows relative to Big Tech while earnings growth is only marginally lower.

Markets can price risk. 
But they can’t price chaos.

And OpenAI is chaos dressed up in a $500 billion valuation.

Whether the post proves prescient or overblown, it has done its job. It has reopened the uncomfortable question shadowing the AI boom: how long can belief outrun balance sheets?

Govt tightens the screws on AI content with sharper IT rules

New norms bring labelling mandates and faster compliance timelines for platforms

NEW DELHI: The government has moved sharply to police the fast-expanding world of AI content, amending its IT rules to formally regulate synthetically generated media and slash takedown timelines to as little as two hours.

The Union ministry of electronics and information technology (MeitY) notified the changes on February 10, with the new regime set to kick in from February 20, 2026. The amendments pull AI-generated content squarely into India’s intermediary rulebook, widening due-diligence, takedown and enforcement obligations for digital platforms.

At the heart of the change is a legal clarification: “information” used for unlawful acts now explicitly includes synthetically generated material. In effect, AI-made content will be treated on par with any other potentially unlawful information under the IT Rules.

Platforms must also step up user warnings. Intermediaries are now required to remind users at least once every three months that violating platform rules or user agreements can trigger immediate suspension, termination, content removal or all three. Users must also be warned that unlawful activity could invite penalties under applicable laws.

Intermediaries must also report offences that attract mandatory disclosure, including those under the Bharatiya Nagarik Suraksha Sanhita, 2023 and the Protection of Children from Sexual Offences Act, to the relevant authorities.

AI-generated content defined
The amendments introduce the term “synthetically generated information”, covering audio-visual material that is artificially or algorithmically created, modified or altered using computer resources in a way that appears real and could be perceived as indistinguishable from an actual person or real-world event.

However, routine and good-faith uses are carved out. Editing, formatting, transcription, translation, accessibility features, educational or training materials and research outputs are excluded so long as they do not create false or misleading electronic records.

Mandatory labelling and metadata
Intermediaries enabling AI content creation or sharing must ensure clear and prominent labelling of such material as synthetically generated. Where technically feasible, the content must carry embedded, persistent metadata or provenance markers, including unique identifiers linking it to the generating computer resource.

Platforms are barred from allowing the removal or tampering of these labels or metadata, a move aimed at preserving traceability.
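The rules do not prescribe a technical format for these markers. As one illustration of what a persistent provenance record could look like, the sketch below assembles a minimal JSON manifest carrying a synthetic-content flag, a content hash and a unique identifier; the schema and field names are hypothetical, not anything specified by MeitY.

```python
# Hypothetical provenance manifest for a piece of synthetic media.
# The schema is illustrative only; the amended IT Rules do not
# prescribe field names or a wire format.

import hashlib
import json
import uuid
from datetime import datetime, timezone

def build_provenance_manifest(media_bytes: bytes, generator_id: str) -> str:
    """Return a JSON record tying media content to its generator."""
    manifest = {
        "synthetic": True,  # the labelling flag the rules require
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "record_id": str(uuid.uuid4()),  # unique identifier
        "generator": generator_id,  # the generating computer resource
        "created_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(manifest, indent=2)

if __name__ == "__main__":
    sample = b"...synthetic image bytes..."
    print(build_provenance_manifest(sample, "example-model-v1"))
```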

Fresh duties for social media firms
Significant social media intermediaries face tighter obligations. Users must be required to declare whether their content is AI-generated before upload or publication. Platforms must deploy technical and automated tools to verify these declarations.

Once confirmed as AI-generated, the content must carry a clear and prominent disclosure flagging its synthetic nature.
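What those automated tools look like is also left to platforms. Continuing the hypothetical manifest sketched above, one minimal check would be to recompute an upload’s hash and compare it against the embedded provenance record; this is an illustrative mechanism, not one the rules mandate.

```python
# Hypothetical verification step: recompute the upload's hash and
# compare it with its provenance manifest. Illustrative only.

import hashlib
import json

def declaration_consistent(media_bytes: bytes, manifest_json: str) -> bool:
    """True if the manifest flags the content as synthetic and the hash matches."""
    manifest = json.loads(manifest_json)
    actual = hashlib.sha256(media_bytes).hexdigest()
    return bool(manifest.get("synthetic")) and manifest.get("content_sha256") == actual
```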

The takedown clock speeds up
The most dramatic shift lies in timelines. The compliance window for lawful takedown orders has been cut from 36 hours to just 3 hours. Grievance redressal timelines have been halved from 15 days to 7.

For urgent complaints, the response window shrinks from 72 hours to 36. In certain specified cases, intermediaries must now act within 2 hours, down from 24.

Platforms are required to act swiftly once aware of violations involving synthetic media, whether through complaints or their own detection. Measures can include disabling access, suspending accounts and reporting matters to authorities where legally required.

Importantly, the government has clarified that removing or disabling access to synthetic content in line with these rules will not jeopardise safe-harbour protection under Section 79(2) of the IT Act.

The message is unmistakable. As AI blurs the line between real and fabricated, the state is racing to keep pace. For platforms, the era of leisurely compliance is over. In India’s digital marketplace, synthetic content now comes with very real consequences.
