YouTube is experimenting with new recommendation algorithms in its English-language territories, on the back of similar experiments in the US that cut views of questionable content by around 50%.
Questionable content is, of course, largely subjective: one person’s content sewer can be another’s polished opinion from yet another right-wing nut job. But YouTube considers that a number of topics “could misinform users in harmful ways – such as videos promoting a phoney miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11”.
Whether this actually reduces the effects of watching such garbage is questionable. It’s still going to be there.
Alongside this, CEO Susan Wojcicki wants to improve the platform’s content removal services to make the worst content easier to wipe from the service (good luck with that), and - this could be the biggie - to break the link between contextual advertising in its current form and where advertisers wish to place their ads.
As Wojcicki says, “not all content allowed on YouTube is going to match what advertisers feel is suitable for their brand; we have to be sure they are comfortable with where their ads appear”. This is presumably a response to pushback from advertisers and media buyers whose ads have appeared against unsavoury content.
There’s more here.