YouTube algorithm keeps recommending ‘regrettable’ videos

YouTube users have reported potentially objectionable content in thousands of videos recommended to them by the platform’s algorithm, according to the nonprofit Mozilla Foundation.

The findings, released Wednesday, revealed many instances of YouTube recommending videos that users had marked as “regrettable” — a broad category including misinformation, violence and hate speech.

The 10-month investigation drew on crowdsourced data that the foundation gathered through an extension for its Firefox web browser, along with a browser extension for Chrome, which let users report potentially problematic content.

Mozilla gathered 3,362 reports submitted by 1,622 unique contributors in 91 countries between July 2020 and June of this year.

The nonprofit then hired 41 researchers from the University of Exeter to review the submissions and determine whether they thought the videos should be on YouTube and, if not, which platform guidelines they might violate.

Researchers found that 71 percent of videos flagged by users as regrettable came from YouTube’s own recommendations. Those videos also tended to be much more popular than others viewed by volunteers, suggesting the company’s algorithm favored objectionable content.

Nine percent of the regrettable videos, which Mozilla said accumulated 160 million views, were later pulled by YouTube for violating platform policy.

“YouTube needs to admit their algorithm is designed in a way that harms and misinforms people,” said Brandi Geurkink, senior advocacy manager at Mozilla. 

Wednesday’s report also found that users in non-English-speaking countries were exposed to regrettable videos at a 60 percent higher rate, with the highest rates reported in Brazil, Germany and France.

A spokesperson for the platform told The Hill that it “constantly” works to improve user experience.

“[O]ver the past year alone, we’ve launched over 30 different changes to reduce recommendations of harmful content,” they continued.

YouTube’s algorithm, which it uses to recommend hundreds of millions of hours of videos to users every day, is a notorious black box that researchers and academics have so far been unable to access.

The Mozilla Foundation created the browser extension last year to let users report problematic content on the platform and to get a better grasp of what YouTube serves up to viewers.

YouTube said in 2019 that it had made a series of 30 unspecified tweaks to its recommendation system for users in the U.S. that reduced watch time of borderline content — videos that toe the line between acceptable and violating platform policy — by 70 percent among non-subscribers.

In 2021, YouTube disclosed for the first time its “violative view rate,” or the percentage of views that come from content that runs afoul of its community guidelines, putting it between 0.16 and 0.18 percent. The company has said at least 1 billion hours of video are watched on YouTube daily.
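To illustrate, and only as a rough sketch since YouTube’s sampling and review process is not public, the metric amounts to a simple ratio over a sample of views:

\[
\text{violative view rate} = \frac{\text{views of videos later found to violate community guidelines}}{\text{total views in the sample}} \times 100\%
\]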

YouTube has been criticized for providing minimal evidence or data to back up its claims of detoxifying the algorithm.

Mozilla’s report calls on the platform to let researchers audit its recommendation algorithm and provide users a way to opt out of personalized suggestions.

It also urges lawmakers to step in and compel a base level of transparency, given how secretive platforms have been about their artificial intelligence technology.
