India’s advertising watchdog sets new rules on AI-generated content in ads
ASCI’s draft guidelines create a three-tier risk framework that tells brands exactly when to label synthetic content, when to ban it outright and when to leave it alone
MUMBAI: India’s advertising self-regulator has had enough of brands hiding behind artificial intelligence. The Advertising Standards Council of India has released draft guidelines on the responsible labelling of synthetically generated content in advertising. These guidelines establish a clear, risk-based framework that distinguishes between harmless AI use, AI use requiring disclosure, and prohibited AI use. The message to brands is blunt: transparency is not optional, and a label does not make an illegal ad legal.
The guidelines, which align with the Synthetically Generated Images Rules amended on 10 February 2026, define synthetically generated content broadly: any advertisement that is artificially created, modified or materially altered to appear authentic, including deepfakes, synthetic spokespersons, AI voices and materially altered imagery. The focus, ASCI is at pains to stress, is on consumer outcomes rather than on the technology itself. AI becomes a problem only when it misleads, exploits the vulnerable, depicts unsafe situations or replicates a real person's likeness without consent.
The three tiers: banned, labelled and left alone
The framework divides AI advertising into three risk categories, each with sharply different consequences.
At the top sits high-risk content, which is flatly prohibited regardless of whether a brand slaps an AI label on it. This includes fabricating endorsements or testimonials, exaggerating product results through misleading visuals, creating fake locations that appear realistic, using deepfakes or a person’s likeness without consent, and deploying AI-generated fictional authority figures — such as a fake doctor promoting a health supplement — to imply medical expertise that does not exist. The point is important: an AI label does not sanitise content that violates the ASCI Code. A deepfake is a deepfake whether it is disclosed or not.
The middle tier covers medium-risk content, where labelling is mandatory. This is the territory where AI materially influences consumer decisions and where the absence of disclosure would mislead. Brands must label advertisements that use virtual or synthetically generated influencers and ambassadors; that replicate a real person's likeness or voice for personalised messaging, even with that person's consent; that use synthetically generated visuals for product performance, unless those visuals accurately replicate actual performance; or that create realistic events or settings entirely with AI. The rules extend further. Demonstrating a product that does not yet exist, such as a three-dimensional model of an unbuilt housing complex, requires a label, as does using AI-generated, exaggerated sound effects that are highly relevant to a product's core features, such as the audio in an advertisement for a headset. Paid or sponsored AI-generated product suggestions carry an additional requirement: they must be specifically labelled as "sponsored by."
The bottom tier covers low-risk content, which needs no label at all. Routine colour correction, standard blemish removal, minor lighting tweaks, decorative AI-generated backgrounds, abstract skylines, ambient music, jingles and background crowd noise all fall here. So do obviously fantastical elements — dragons, fairies and the like — that no reasonable consumer would mistake for reality. Generating advertising copy, creating audio descriptions for the visually impaired and preparing documents in good faith are also exempted.
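For readers who think in code, the three tiers reduce to a short decision tree. The sketch below is purely illustrative and hypothetical: the flag names are paraphrases of examples from the draft, not terminology from the ASCI guidelines, and real classification would turn on judgment calls no boolean can capture.

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "prohibited"        # banned outright; an AI label does not cure it
    MEDIUM = "label required"  # permitted only with an AI disclosure label
    LOW = "no label needed"    # routine enhancement or obvious fantasy

# Hypothetical flags paraphrasing the draft's examples; not ASCI's wording.
def classify_ad(*, fabricates_endorsement=False,
                uses_likeness_without_consent=False,
                fake_authority_figure=False,
                synthetic_influencer=False,
                likeness_with_consent=False,
                simulated_product_performance=False) -> RiskTier:
    # Tier 1: prohibited regardless of labelling
    if (fabricates_endorsement or uses_likeness_without_consent
            or fake_authority_figure):
        return RiskTier.HIGH
    # Tier 2: allowed only with a mandatory disclosure label
    if (synthetic_influencer or likeness_with_consent
            or simulated_product_performance):
        return RiskTier.MEDIUM
    # Tier 3: everything else (colour correction, fantasy elements, copy)
    return RiskTier.LOW
```

The ordering is the substantive point: prohibited uses are checked first, so a deepfake of a fake doctor lands in the banned tier no matter what label it carries, mirroring the guideline that disclosure does not sanitise an illegal ad.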
What the label must say
Where disclosure is required, brands may use standard formulations such as “Audio/Video created using AI” or “Audio/Video enhanced using AI”, or any alternative that accurately informs the consumer. Labels must follow ASCI’s existing disclaimer guidelines. The guidelines add a final, pointed caveat: synthetically generated content may still be considered misleading or objectionable even when correctly labelled, if the end effect is likely to mislead or harm the consumer. The label, in other words, is a floor, not a ceiling.
What it means for the industry
The guidelines land at a moment when AI is moving fast through Indian advertising, from synthetic brand ambassadors on social media to AI-generated product demonstrations on e-commerce platforms. The framework gives brands a workable decision tree: assess the risk, apply the label if required, and do not assume that disclosure excuses deception.
The harder question, as with all self-regulatory frameworks, is enforcement. ASCI has the guidelines. Whether brands follow them is another matter. India’s consumers, increasingly savvy about synthetic media, may well become the most effective enforcement mechanism of all.
AI in advertising is here to stay. ASCI has just told the industry exactly what the rules are. Now the industry has to play by them.