Generative AI becomes defining trend in digital ad industry, Google report shows
March 27, 2024
Onome Amuge
The use of generative AI emerged as a defining trend in the digital advertising industry in 2023. As explored in Google’s latest Ads Safety Report, the technology redefined how brands and advertisers connect with consumers, driving new innovations in everything from performance optimisation to image editing.
The report, which highlights Google’s progress in enforcing its policies for advertisers and publishers, also discusses the challenges generative AI poses, particularly around authenticity, transparency, and accountability. It acknowledges that these issues need to be addressed and outlines the steps Google is taking to do so.
Beyond its transformative potential for advertisers, generative AI also presents significant benefits for Google’s enforcement efforts, the report noted. In particular, it highlighted Large Language Models (LLMs), a type of generative AI, as a powerful tool for detecting and preventing unsafe advertising content.
How generative AI bolsters enforcement
Google’s safety teams have been using AI-driven machine learning systems to enforce its policies at scale for years, and the report highlights the importance of this approach.
“It’s how, for years, we’ve been able to detect and block billions of bad ads before a person ever sees them. But, while still highly sophisticated, these machine learning models have historically needed to be trained extensively – they often rely on hundreds of thousands, if not millions of examples of violative content,” the tech giant stated.
According to Google, the scammers behind ads promoting unreliable financial claims have grown increasingly sophisticated. They use a variety of tactics to evade detection, including tailoring their ads around new financial products or services, such as digital currencies or investment advice. Although traditional machine learning models are trained to detect these policy violations, the fast-paced and ever-changing nature of financial trends makes it harder, at times, to differentiate between legitimate and fake services and to scale automated enforcement systems quickly enough to combat scams.
Google noted that LLMs are better able to quickly recognise new trends in financial services, identify the patterns of bad actors abusing those trends, and distinguish a legitimate business from a get-rich-quick scam. This, it stated, has helped its teams become even more nimble in confronting emerging threats of all kinds.
“We’ve only just begun to leverage the power of LLMs for ads safety. Gemini, launched publicly last year, is Google’s most capable AI model. We’re excited to have started bringing its sophisticated reasoning capabilities into our ads safety and enforcement efforts,” Google stated.
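To make the general idea concrete, the sketch below shows one way an LLM-style classifier might screen ad copy for financial-scam signals. It is purely illustrative: the prompt wording, the risk labels and the call_llm stub are assumptions introduced here, not Google’s actual enforcement pipeline, and the stub stands in for a real model call.

```python
# Illustrative sketch only: a hypothetical LLM-backed screen for risky
# financial ad copy. The prompt, labels and call_llm() stub are assumptions
# for illustration and do not reflect Google's actual systems.

RISK_PROMPT = (
    "You review advertising copy for policy compliance. "
    "Label the ad as LEGITIMATE, SUSPICIOUS, or SCAM, watching for "
    "guaranteed returns, fake endorsements, and pressure tactics "
    "around new financial products.\n\nAd: {ad_text}\nLabel:"
)


def call_llm(prompt: str) -> str:
    """Placeholder for a real model client; a simple keyword check keeps
    the sketch runnable offline."""
    text = prompt.lower()
    if "guaranteed" in text or "double your" in text:
        return "SCAM"
    return "LEGITIMATE"


def screen_ad(ad_text: str) -> str:
    """Return a coarse risk label for one piece of ad copy."""
    label = call_llm(RISK_PROMPT.format(ad_text=ad_text)).strip().upper()
    return label if label in {"LEGITIMATE", "SUSPICIOUS", "SCAM"} else "NEEDS_REVIEW"


print(screen_ad("Double your savings in 7 days with our new crypto token!"))  # -> SCAM
```

In practice, the appeal of an LLM here is that the model can generalise to newly coined financial products without being retrained on millions of labelled examples, which is the limitation the report attributes to older models.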
Preventing fraud and scams
The report noted that scams and fraud became an increasing problem across the internet in 2023, not just on Google’s platforms, as bad actors constantly evolved their tactics to evade detection. In response, Google took a multi-pronged approach to combating scams and fraud, updating policies, creating rapid-response enforcement teams, and enhancing detection techniques. This culminated in the launch of its Limited Ads Serving policy in November 2023, designed to protect users by limiting the reach of advertisers with whom the platform is less familiar.
Under this policy, Google implemented a “get-to-know-you” period for advertisers who don’t yet have an established track record of good behavior, during which impressions for their ads might be limited in certain circumstances. Ultimately, Limited Ads Serving, which is still in its early stages, is set to help ensure well-intentioned advertisers are able to build up trust with users, while limiting the reach of bad actors and reducing the risk of scams and misleading ads.
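As a rough illustration of how such a “get-to-know-you” ramp could work in principle, the sketch below caps daily impressions for advertisers without an established track record. The thresholds, the AdvertiserProfile fields and the cap logic are assumptions made for illustration only, not Google’s actual Limited Ads Serving implementation.

```python
# Illustrative sketch only: one way a trust-building ramp could limit the
# reach of unfamiliar advertisers. Thresholds and fields are assumptions,
# not Google's actual Limited Ads Serving policy.

from dataclasses import dataclass


@dataclass
class AdvertiserProfile:
    days_active: int          # how long the account has been advertising
    policy_strikes: int       # confirmed policy violations on record
    verified_identity: bool   # whether advertiser verification is complete


def daily_impression_cap(profile: AdvertiserProfile, requested: int) -> int:
    """Scale back impressions while an advertiser builds a track record."""
    if profile.policy_strikes > 0:
        return 0  # no ramp for accounts with confirmed violations
    if not profile.verified_identity or profile.days_active < 30:
        return min(requested, 1_000)   # tight cap during the trust-building period
    if profile.days_active < 90:
        return min(requested, 50_000)  # gradual ramp once some history exists
    return requested                   # established advertisers serve normally


print(daily_impression_cap(AdvertiserProfile(5, 0, False), 200_000))  # -> 1000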
The report highlighted the importance of responding quickly to new trends in online abuse, such as the use of deepfakes in ads. When Google detected that these types of ads were being used to scam users, it acted swiftly to address the problem. The company formed a dedicated team to tackle the issue, trained its automated enforcement models to detect similar ads, and began removing them at scale. Google also updated its misrepresentation policy to better address this type of abuse and protect users.
Overall, Google blocked or removed 206.5 million advertisements for violating its misrepresentation policy, which covers many scam tactics, and 273.4 million advertisements for violating its financial services policy. It also blocked or removed over one billion advertisements for violating its policy against abusing the ad network, which includes promoting malware.
While Google has made significant progress in the fight against scam ads, the report acknowledged that this is an ongoing battle. Bad actors are constantly evolving their tactics and finding new ways to deceive people, including the use of deepfakes. To continue to protect users, Google pledged to continue investing in detection technology and partnering with organisations like the Global Anti-Scam Alliance and Stop Scams UK. By working together, these organisations can pool their resources and expertise to better protect consumers worldwide.
Google noted that its ultimate goal is to catch bad ads and suspend fraudulent accounts before they make it onto its platforms, or to remove them immediately once detected, and added that AI is improving its enforcement on all these fronts.
The report stated: “In 2023, we blocked or removed over 5.5 billion ads, slightly up from the prior year, and 12.7 million advertiser accounts, nearly double from the previous year. Similarly, we work to protect advertisers and people by removing our ads from publisher pages and sites that violate our policies, such as sexually explicit content or dangerous products. In 2023, we blocked or restricted ads from serving on more than 2.1 billion publisher pages, up slightly from 2022.
“We are also getting better at tackling pervasive or egregious violations. We took broader site-level enforcement action on more than 395,000 publisher sites, up markedly from 2022.”
The report made it clear that AI is having a significant impact on Google’s ability to enforce its policies and protect users. Last year, more than 90% of enforcement actions taken on publisher pages were driven by machine learning models, including the latest LLMs. Despite the effectiveness of these models, Google acknowledges that mistakes can still happen. As such, any advertiser or publisher can appeal an enforcement action if they believe it was made in error. When this happens, Google’s teams will review the case and, if a mistake is found, use it to improve their systems.
Staying nimble and looking ahead
The report highlighted the importance of adaptability when it comes to ads safety. The digital advertising landscape is constantly evolving, with new technology and trends emerging and changing all the time. This can present new opportunities, but also new challenges when it comes to protecting users. To effectively address these challenges, Google explained that it is continuously developing its policies, strengthening its enforcement systems, working with other organizations, and giving users more control over their ad experience.
In conclusion, the report acknowledged that it is impossible to predict what will happen in the coming year. However, Google is confident that its ongoing investments in policy development, detection systems, and enforcement capabilities will ensure that it is prepared for any future challenges.