Google plans to use warning labels, AI to fight extremist content
June 20, 2017
The YouTube owner has announced plans to use new artificial intelligence technology to identify extremist videos. It will also slap warning labels on objectionable content that does not meet its criteria for removal, and make such videos harder to find.
The move comes as tech companies face increased pressure in Europe to better regulate extremist content following a series of terror attacks in Berlin, Paris and London.
Google said it would start by using its “most advanced machine learning research” to identify and quickly remove terrorism-related videos after they are uploaded to its platforms. It also said it would bring new human resources to bear by expanding a program that allows trusted third-party organizations to flag extremist content.
The biggest change affects videos with inflammatory religious or supremacist material that do not violate the site’s policies. Google will now place warning labels on such videos, while preventing users from commenting on or endorsing them. The videos will be harder to find and they will not carry advertisements.
“We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints,” said Kent Walker, Google’s general counsel, in a blog post announcing the changes.
Google said it would also try to reach potential ISIS recruits with targeted advertising that directs them to “anti-terrorist videos that can change their minds about joining.” The tech giant said that previous trials of the system resulted in users watching over half a million minutes of videos that debunk terrorist recruiting messages.
Previous efforts by Silicon Valley firms to get a grip on extremist content have failed to impress observers in Europe.
Google faced an advertiser exodus in recent months after companies discovered their spots were appearing alongside extremist content on YouTube. Marriott (MAR) and Etihad Airways were among more than a dozen brands to pull their advertisements from the platform.
Regulators have also called on the industry to do more.
Facebook (FB), Twitter (TWTR), Microsoft (MSFT) and Google agreed with Europe’s top regulator last year to review the majority of hate speech flagged by users within 24 hours and to remove any illegal content. A follow-up report this month showed Facebook and YouTube both managed to remove 66% of hate speech posts and videos after they were flagged. Twitter was a laggard, failing to take down the majority of hate speech posts.
In the U.K., a parliamentary committee report published in May accused the industry — including Google — of prioritizing profit over user safety by continuing to host unlawful content.
“The biggest and richest social media companies are shamefully far from taking sufficient action to tackle illegal and dangerous content,” the Home Affairs Committee report said. “Given their immense size, resources and global reach, it is completely irresponsible of them to fail to abide by the law.”
The report called for “meaningful fines” if the companies do not quickly improve.
In April, the German cabinet approved a plan to start fining social media companies as much as €50 million ($56 million) if they fail to quickly remove posts that breach German law.
Courtesy CNN