YouTube is blocking advertisers from using social and racial justice terms like “Black Lives Matter” to target videos. At the same time, the site is allowing targeting for terms including “white lives matter” and “all lives matter,” according to an investigation by The Markup.
Google’s ad policies officially deter advertisers from targeting users based on “identities and belief,” instead encouraging them to focus on “a user’s interests.” But it’s a fine line that the company has struggled to define. Four years ago, companies boycotted YouTube because their ads were appearing alongside hate content. Google responded with new ad policies, which allowed the company to remove ads from offending content.
Now, Google’s advertiser-facing keyword block is having unintended consequences. The company appears to be trying to eliminate the need for retroactive moderation, though it’s not clear which keywords are blocked or why.
The unwritten policy could help block ads from appearing on videos that are critical of the Black Lives Matter movement. However, by taking such a simplistic and opaque approach, the company is preventing a number of YouTubers from monetizing their videos. Media companies are being caught up as well, according to the investigation, including news clips from NBC and the Australian Broadcasting Corporation. It’s also frustrating companies that have supported the Black Lives Matter movement, such as Ben & Jerry’s, and are interested in using the platform to sponsor those YouTubers.
The investigation by The Markup revealed additional discrepancies between how Google treats content aimed at different audiences. For example, Google Ads blocks targeting for the term “Black power,” a phrase frequently linked to the civil rights movement, but allows it for “white power,” which is widely acknowledged to be a white supremacist slogan.
Before the publication presented its findings to Google, the ad platform allowed targeting for “Christian fashion” and “Jewish fashion,” but not “Muslim fashion.” After Google was alerted to the inconsistency, it blocked targeting for any terms related to religious fashion. Google also changed how blocked terms appear to ad platform users: a telltale difference in the site’s code previously revealed which terms were blocked, but that difference no longer exists, eliminating the small window of transparency into the unwritten policy.
Facebook’s “interest categories”
Meanwhile, Facebook is continuing to allow advertisers to target individuals whom the company has classified as interested in militias. The Tech Transparency Project discovered the issue after creating an account to track right-wing extremism on the site and using it to follow pages and groups that post election misinformation and calls for violence. Facebook’s algorithms automatically assigned the account to so-called “interest categories” that included “militia” and “resistance movement.” Advertisers can use these interest categories to target individual accounts. A Facebook spokesperson said that the site had removed the targeting terms last summer and was looking into the matter.
Facebook has been under increasing scrutiny for how it oversees its sprawling platform. The company was found to be autogenerating pages for white supremacist groups. The practice was the result of another problematic algorithm the company had developed, one that would create pages when someone added a white supremacist group to their profile as an employer, for example.
Many of the company’s moderation practices rely on algorithms to catch violations, but those algorithms appear easy to subvert: simple misspellings have been enough to throw them off. Pages for militia groups and other right-wing extremists continued to post propaganda long after Facebook banned such content.