Facebook is working on ad topic controls

Facebook is developing tools to help advertisers steer their ad placements away from certain topics in its News Feed.

The company said it will begin testing “topic exclusion” controls with a small group of advertisers. It said, for example, that a children’s toy company could avoid content related to “crime and tragedy” if it so wished. Other topics will include “current affairs and politics” and “social issues”.

The company said developing and testing the tools would take “a large part of the year.”

Facebook, along with players such as Google’s YouTube and Twitter, has worked with marketers and agencies through a group called the Global Alliance for Responsible Media, or GARM, to develop standards in this domain. They have worked on actions that contribute to “consumer and advertiser safety,” including defining harmful content, setting reporting standards, establishing independent oversight, and agreeing to create tools that better manage ad adjacency.

Facebook’s News Feed tools build on controls already available in other areas of the platform, such as in-stream video and its Audience Network, which lets mobile software developers serve in-app ads targeted to users based on Facebook data.

The concept of “brand safety” is important to any advertiser that wants to ensure its ads don’t appear near certain topics. But there has also been growing pressure from the ad industry to make platforms like Facebook safer overall, not just around their ad placements.

The CEO of the World Federation of Advertisers, which created GARM, told CNBC last summer that this was an evolution from “brand safety” toward a greater focus on “societal safety.” The crucial point is that even if ads don’t appear in or next to specific videos, many platforms are funded largely by ad dollars. In other words, ad revenue helps subsidize all the content on a platform, including content that carries no ads. And many advertisers say they feel responsible for what happens on the ad-supported web.

This was made abundantly clear last summer, when a slew of advertisers temporarily pulled their advertising dollars from Facebook, asking it to take tougher action to stop the spread of hate speech and misinformation on its platform. Some of these advertisers didn’t just want their ads to stay away from hateful or discriminatory content; they wanted a plan to ensure such content was removed from the platform entirely.

Twitter said in December that it is working on its own in-feed brand safety tools.

Jessica C. Bell