Facebook adds new brand safety controls, including topic exclusions for video ads
Facebook has announced new brand safety controls for video advertisers, including machine learning-based topic exclusions and publisher allowlists, giving advertisers more control over where their campaigns are displayed.
The first is Topic Exclusions, which gives video advertisers a new way to control which videos their in-stream ads can appear in, based on each video's content.
As Facebook explains:
“Topic exclusions will provide in-stream advertisers with a more granular, content-level opt-out tool. Powered by machine learning technology, topic exclusions are designed to allow in-stream advertisers to choose content-level exclusions across four different topics: news, politics, gaming, and religious and spiritual content.”
As you can see in the screenshot above, advertisers will be able to prevent their ads from appearing in video uploads related to these content areas, although the same controls do not apply to live streams.
Facebook didn’t provide technical details on how the system determines which videos fall into each category (beyond the above note that it’s “powered by machine learning”), but the assumption would be that it also uses the historical context of each uploader, including their Page rating, post comments, and engagement, to determine the intent of each video.
The other addition is Publisher Allowlists, which will give advertisers the option of selecting a specific list of publishers on whose content they want their ads to appear. This will allow advertisers to run their campaigns exclusively against content from those publishers.
Brand safety controls rose to prominence in 2017, after YouTube lost millions in ad revenue when advertisers began pulling their campaigns over ads appearing alongside extremist content and hate speech. Of course, the ideal answer would be for YouTube and other platforms to remove extremist content and hate speech outright, but with brands facing varying levels of risk from association with different types of content, all major digital platforms have since worked to add new placement control options to prevent unwanted associations.
It’s worth noting that Facebook and YouTube have both also moved to take more action against this content directly, but brand safety controls like these give advertisers a greater ability to protect themselves from such concerns by putting more control in their hands, as opposed to relying solely on the platforms and their tools.
On top of that, Facebook also recently strengthened its third-party audit credentials, joining the inaugural group of companies to receive the Trustworthy Accountability Group’s Brand Safety certification.
“We also worked with the Global Alliance for Responsible Media (GARM) to align on brand safety standards and definitions, scaling education, common tools and systems, and independent industry monitoring. We have aligned with GARM on the definitions of the 11 categories, including hate speech and acts of aggression, which are included in the GARM/4A’s Brand Safety Floor and Suitability Framework.”
These broader partnerships allow digital platforms to better align with accepted benchmarks and practices, and to weed out problematic groups en masse, rather than each platform going it alone. This approach helps set new industry standards and enables further action on these concerns, ultimately giving advertising partners more control and assurance.
You can learn more about Facebook’s latest brand safety updates here.