Facebook’s VIP ‘Whitelist’ Reveals Two Big Problems

Facebook has a couple of big problems when it comes to filtering out the often misleading and dangerous stuff that users post on the social network. First, its artificial intelligence doesn’t work. Second, the company doesn’t want to admit this, because hiring humans to do proper moderation would undermine its business model. The combination should have legislators and shareholders very worried.

An investigation by Jeff Horwitz at The Wall Street Journal has shed new light on Facebook’s duplicity: Even as executives publicly claimed that their automated moderation was applying the same rules to all users, the company was actually giving special treatment to celebrities and politicians. Such “whitelisted” accounts were handled by humans, who allowed inflammatory and misleading posts that algorithms otherwise would have censored — including a call to violence from then-President Donald Trump.

Why the two-tiered approach? For one, it seems Facebook recognizes that its algorithms are glitchy, and it’s fine with foisting them upon regular users but doesn’t want to aggravate influencers, who might complain loudly and publicly. Beyond that, and perhaps more important, incendiary posts by famous people generate a lot of engagement, and hence advertising revenue. The whitelist was a way of quietly addressing these issues while maintaining the fiction that the AI was actually working.

To some extent, Facebook merely reflects the broader application of technology in society. In many realms, the elite get the human touch while the rest get the algorithm. People with Ivy League backgrounds find jobs through their friends, while others must contend with hiring algorithms that funnel them into less prestigious positions, even if they know their stuff. Applicants to top-tier colleges get personal interviews, while other schools use enrollment algorithms that systematically reduce scholarship awards.
