eNews
Google takes down 1.7 bn. ads for violating policies
MUMBAI: In 2016, Google took down 1.7 billion ads that violated its advertising policies, more than double the number of bad ads it took down in 2015, according to the latest ‘Bad Ads Report’ for 2016 released by the company.
“A free and open web is a vital resource for people and businesses around the world. And ads play a key role in ensuring you have access to accurate, quality information online. But bad ads can ruin the online experience for everyone. They promote illegal products and unrealistic offers. They can trick people into sharing personal information and infect devices with harmful software. Ultimately, bad ads pose a threat to users, Google’s partners, and the sustainability of the open web itself,” said Sustainable Ads Product Management director Scott Spencer.
Last year, Google did two key things to take down more bad ads. First, it expanded its policies to better protect users from misleading and predatory offers. For example, in July it introduced a policy banning ads for payday loans, which often result in unaffordable payments and high default rates for users. In the six months since launching this policy, Google disabled more than five million payday loan ads.
Second, it beefed up its technology to spot and disable bad ads even faster. For example, “trick to click” ads often appear as system warnings to deceive users into clicking on them, not realizing they are often downloading harmful software or malware. In 2016, Google detected and disabled a total of 112 million “trick to click” ads, six times more than in 2015.
According to the report, the most common inappropriate online ads were those for illegal products. Google disabled more than 68 million bad ads for healthcare violations and 17 million ads for illegal gambling violations in 2016.
Google also moved to protect consumers against ads that try to drive clicks and views by intentionally misleading people with false information, such as asking ‘Are you at risk for this rare, skin-eating disease?’ or offering miracle cures like a pill that will help people lose 50 pounds in three days without lifting a finger. In 2016, the company took down nearly 80 million bad ads for deceiving, misleading and shocking users.
As for ads developed exclusively for the mobile web, Google’s systems detected and disabled over 23,000 ‘self-clicking ads’ on its platforms, compared with only a few thousand such ads the previous year. The report also highlighted a dramatic increase in scamming activity in 2016: approximately 7 million bad ads were disabled for intentionally attempting to trick Google’s detection systems.
2016 also saw the rise of a new type of scammer, the ‘tabloid cloaker’, which takes advantage of current trends and hot topics: a government election, a trending news story or a well-known celebrity. The ads used by these scammers may look like headlines for real articles on a news website, but when clicked, consumers are redirected to a site selling weight-loss products. In 2016, Google suspended over 1,300 accounts for ‘tabloid cloaking’. In December alone, Google took down 22 ‘cloakers’ responsible for ads seen over 20 million times by people online in a single week.
Over the years, Google has worked to find ads that violate its policies, blocking either the ad or the advertiser depending on the violation. In 2016, it took action on 47,000 sites for promoting content and products related to weight-loss scams. It also took action on more than 15,000 sites for unwanted software and disabled 900,000 ads containing malware. Around 6,000 sites and 6,000 accounts were suspended for attempting to advertise counterfeit goods, such as imitation designer watches.
To keep Google’s content and search networks safe and clean, the company has introduced stricter policies, including the new AdSense misrepresentative content policy. The policy update, introduced in November 2016, enables the company to take action against website owners who misrepresent who they are and deceive users with their content.
AI could replace half of entry-level white-collar work: Anthropic study
Hiring in AI-exposed occupations fell 14 per cent post-ChatGPT
SAN FRANCISCO: From lamplighters to elevator operators, waves of technology have repeatedly erased once-common jobs. Now artificial intelligence may be poised to do the same for large swathes of professional work.
A new study by Anthropic suggests that while AI tools are technically capable of performing many knowledge-economy tasks, real-world adoption lags far behind that potential, at least for now.

The report, Labor market impacts of AI: A new measure and early evidence, by Maxim Massenkoff and Peter McCrory, introduces a new metric called “observed exposure,” which compares what AI systems could theoretically perform with what they are actually doing in workplaces.
Using professional interaction data from Anthropic’s Claude model, the researchers found that AI could theoretically cover a wide share of tasks in business, finance, management, computing, mathematics, legal services and office administration. Yet current adoption represents only a small fraction of those capabilities.
That gap between potential and reality reflects a mix of legal barriers, technical limitations and the continued need for human oversight, the study said. But the authors suggest those constraints may prove temporary as the technology matures.
Warnings about AI’s impact on white-collar employment have been growing. Anthropic CEO Dario Amodei has previously argued that AI could disrupt as much as half of entry-level professional work, while Microsoft AI CEO Mustafa Suleyman has suggested that most professional tasks could be automated within 12 to 18 months.
Highly educated workers most exposed
Contrary to common assumptions, the study finds that workers most exposed to AI are not those in manual labour but highly educated professionals. The most exposed group is 16 percentage points more likely to be female, earns on average 47 per cent more than the least exposed group and is nearly four times as likely to hold a graduate degree.
Occupations including computer programmers, customer service representatives and data entry clerks are among the most vulnerable to automation.
Yet even in highly exposed fields, AI is not yet replacing jobs at scale. The researchers cite routine medical tasks, such as authorising prescription refills, as examples that AI could technically perform but is not widely observed doing in practice.
In the report’s visual framework, actual AI usage (the “red area”) remains far smaller than the theoretical “blue area” of possible tasks. Over time, the researchers expect the red area to expand as adoption deepens.

At the other end of the labour market, roughly 30 per cent of occupations show virtually no AI exposure. Roles such as cooks, mechanics, bartenders and dishwashers still depend heavily on physical presence and manual work that large language models cannot replicate.
Hiring slowdown rather than layoffs
So far the clearest labour-market signal is not mass layoffs but a slowdown in hiring within AI-exposed occupations.
According to the study, job-finding rates in those sectors have fallen about 14 per cent since the arrival of generative AI tools such as ChatGPT compared with 2022 levels. A separate study cited by the authors found a 16 per cent drop in employment among workers aged 22 to 25 in AI-exposed roles.
Recent labour data from the US Bureau of Labor Statistics also point to softer hiring conditions, with employers shedding 92,000 jobs in February and unemployment rising to 4.4 per cent.
Some companies have already linked layoffs to automation. Jack Dorsey said his payments firm Block recently cut nearly half its workforce in part because AI tools allow smaller teams to operate more efficiently.
Not everyone is convinced the technology is solely responsible. Critics such as Marc Benioff have accused some firms of “AI washing”, using automation as a convenient explanation for cost-cutting measures.
Still, the researchers warn that the longer-term risk is a potential “white-collar recession”. If unemployment in the most AI-exposed occupations were to double, from about 3 per cent to 6 per cent, it would mirror the scale of labour-market disruption seen during the Global Financial Crisis.
For now, the shift may simply mean fewer entry-level openings. Some young workers are staying longer in existing roles, switching sectors or returning to education rather than entering AI-exposed fields.