On YouTube's engagement-driven recommendations

April 3, 2019 (792 words) :: Reflections on yesterday's Bloomberg piece about YouTube's unwillingness to stop recommending toxic videos in order to preserve high 'engagement'.
Tags: big-tech

This post is day 93 of a personal challenge to write every day in 2019. See the other fragments, or sign up for my weekly newsletter.


My writing energy for today went into a commissioned piece (coming out soon!), so I’m just going to do a quick news comment. Yesterday, Bloomberg published an article demonstrating the moral bankruptcy of huge tech companies driven by profit: YouTube Executives Ignored Warnings, Letting Toxic Videos Run Rampant.

[…] In recent years, scores of people inside YouTube and Google, its owner, raised concerns about the mass of false, incendiary and toxic content that the world’s largest video site surfaced and spread. One employee wanted to flag troubling videos, which fell just short of the hate speech rules, and stop recommending them to viewers. Another wanted to track these videos in a spreadsheet to chart their popularity. A third, fretful of the spread of “alt-right” video bloggers, created an internal vertical that showed just how popular they were. Each time they got the same basic response: Don’t rock the boat.

The company spent years chasing one business goal above others: “Engagement,” a measure of the views, time spent and interactions with online videos. Conversations with over twenty people who work at, or recently left, YouTube reveal a corporate leadership unable or unwilling to act on these internal alarms for fear of throttling engagement.

All of this is old news, of course, but still, there’s something astounding about a company embracing such an openly user-hostile metric as its objective function. Getting people addicted to your videos is so clearly bad for them, partly because of the prevalence of extremist content with horrifying real-world consequences, and partly because cultivated addiction is just bad, full stop. Nobody thinks tobacco companies are the good guys anymore, and there’s a special place in hell for anyone who helped Purdue Pharma get people addicted to opioids. It baffles me that anyone can still see Google as a mostly good company at this point.
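
To make “objective function” concrete, here’s a minimal sketch of what an engagement-style ranking score might look like. Every signal name and weight below is invented for illustration; YouTube’s actual ranking system is far more complicated and not public.

```python
# Toy illustration of an engagement-style objective. All signal names and
# weights are made up; this is not YouTube's actual ranking system.

def engagement_score(video: dict) -> float:
    """Score a candidate video purely by how much attention it's predicted to capture."""
    return (
        0.5 * video["predicted_watch_minutes"]     # time spent
        + 0.3 * video["predicted_click_through"]   # views
        + 0.2 * video["predicted_interactions"]    # likes, comments, shares
    )

# Ranking candidates by this score asks only one question: will the viewer
# keep watching? Nothing checks whether the video is true, healthy, or
# good for them.
candidates = [
    {"title": "calm explainer", "predicted_watch_minutes": 4.0,
     "predicted_click_through": 0.10, "predicted_interactions": 0.02},
    {"title": "outrage bait", "predicted_watch_minutes": 11.0,
     "predicted_click_through": 0.35, "predicted_interactions": 0.20},
]
recommendations = sorted(candidates, key=engagement_score, reverse=True)
```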

Since 2017, YouTube has recommended clips based on a metric called “responsibility,” which includes input from satisfaction surveys it shows after videos. YouTube declined to describe it more fully, but said it receives “millions” of survey responses each week.

This is just hilarious. How will a satisfaction survey do anything to counter the fact that YouTube actively recommends videos that uphold reactionary views? “Did you find this video of Ben Shapiro destroying a feminist SJW to be satisfactory? Yes? Okay great, we’ll show you more!”

The problem is not that some people may be unsatisfied with their recommendations (though, to be fair, I am perpetually unsatisfied with my personal recommendations on YouTube; they’re almost comically bad). The problem is that YouTube is promoting toxic videos to people who turn out to find them satisfying, because hatred is appealing when the political status quo is tepid and you haven’t been exposed to a better alternative. Whether or not this was a deliberate algorithmic choice hardly matters; the net effect of these choices is incredibly toxic, and intent doesn’t count for much when the outcome is this bad.

There’s a broader problem here, which the Bloomberg article doesn’t really get into. What would a better recommendation algorithm look like, if it were driven by something other than engagement? It’s not entirely clear to me what the answer is. YouTube’s broken recommendation system points to the difficulty of answering larger societal questions: what sort of culture we want, and what sort of political views we think are valid.
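
For the sake of argument, here’s one toy answer: keep predicting engagement, but blend it with other signals, like the post-watch satisfaction surveys Bloomberg mentions and ratings from human moderators. Again, every name and weight here is invented; this is a sketch of the shape of an answer, not a real design.

```python
# Toy sketch of an alternative objective that doesn't maximize engagement alone.
# All signal names and weights are invented for illustration.

def blended_score(video: dict) -> float:
    return (
        0.4 * video["predicted_watch_minutes"]      # engagement still counts...
        + 0.3 * video["post_watch_satisfaction"]    # ...alongside survey feedback
        - 0.5 * video["moderator_toxicity_rating"]  # ...minus a penalty from human review
    )
```

Even in this toy version, someone still has to decide which signals count as “toxic” and how heavily to penalize them.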

These are tough questions, and they should probably be deliberated democratically, not left up to the whims of various Google executives. Or, at the very least, appoint an outside council of academics, researchers, and policy experts - even technocracy would be better than the shitshow we have now, where decisions are made by corporate fiat in the hope of enhancing shareholder value, without any real democratic accountability.

I don’t have a pithy slogan here, like, “We need a People’s YouTube!” I mean, we do, but that’s not going to solve the problem by itself. If reactionary content is being created, for whatever reason, then it’ll probably be uploaded to sites like YouTube, too. The long-term solution is to improve society enough - and to attain a high enough level of political education - that reactionary content is less likely to be created in the first place. In the meantime, though, we can at least have algorithms that don’t signal-boost toxic content, maybe with the help of human moderators (who, ideally, would be paid really well for their trouble).

Not quite People’s YouTube, but at least Less Toxic YouTube. Given the outsize cultural role that YouTube fills for many people (especially younger people), that in itself would be a worthy goal.

