Since the Facebook ad boycott, there has been a push to civilize social media, with renewed focus on brand safety. But there is only so much platforms can do to eliminate all the potentially offensive and misleading material surging through their pages and videos.
The best that brands can hope for on social, it seems, is a well-lit place, if not a totally safe space.
An early draft of one proposal from an influential industry group, the Global Alliance for Responsible Media, seems to recognize these limitations. The group, known by its acronym GARM, has been circulating a rough cut of new rules it is drafting in the wake of the Facebook boycott.
The proposal attempts to define offensive content, push platforms to prove how prevalent that content is on their services, and give brands transparency about when ads appear next to that content.
“Irrespective of harmful content being posted, every marketer should have the ability to manage the environment they advertise in and the risks,” says one slide in GARM’s proposal, which was obtained by Ad Age and marked “confidential.”
GARM declined to discuss the proposal because the work is not finished. Ad Age reviewed six pages of the draft, which show that GARM is working to solidify its plans later this month. The plans cover defining hate speech, grading platforms on their ability to police harmful content, conducting audits to verify that platforms are taking the actions they promise, and giving advertisers more controls.
Read the full article on Ad Age.