Facebook is reported to be creating teams to explore, and ultimately address, how the algorithms that power both its core platform and Instagram affect minority users, with a particular focus on how their machine learning components may have become implicitly biased due to flawed data.
The Wall Street Journal reports that the company is forming an “equity and inclusion team” at Instagram, with an equivalent team at Facebook, that will examine how minority users are affected compared with white users.
This article is part of an ongoing WARC series focused on educating brand marketers on diversity and activism, in light of the recent progressive steps made with the Black Lives Matter movement.
“Any bias in our systems and policies runs counter to providing a platform for everyone to express themselves,” Vishal Shah, head of product at Instagram, told the Journal.
According to the Journal, account moderation software that automatically makes suspension decisions disproportionately affects Black users.
Given the platform’s colossal influence, and the fact that Black employees make up such a tiny minority of the Facebook workforce (less than 4%), this is a topic that is likely to have flown under the radar, though the company has announced initiatives to increase the number of senior employees from minority backgrounds by 30% over the next five years.
The topic of racial bias has surrounded the company for some time, not only in the form of hate speech that sparked a boycott of the service, but as a topic that sources say could not be studied without express permission from the very top of the network.
That racism in the US (and around the world) has now become a mainstream issue appears to have spurred Facebook to examine a problem that the company had previously kept quiet about.
Serious academic work and aggressive reporting have shown not only that advertising is skewed by race and gender, but that Facebook was enabling advertisers to target users on the basis of race, a practice that the company has now stopped.
Sourced from the Wall Street Journal, WARC, ProPublica