Tuesday, April 2, 2019

YouTube's Algorithmic Rabbit Hole


            Are YouTube’s recommended videos a rabbit hole? In the article linked below, YouTube’s product chief, Neal Mohan, answers questions about the recommendation algorithm, which has been accused of pushing users toward more and more extreme content. YouTube says these recommendations account for about 70 percent of the total time users spend on the site, and Mohan says almost 2 billion people come to the site every month. That is a lot of power: YouTube can steer people toward the content that “grabs the most attention” and ultimately brings in more money. Mohan said in the interview that YouTube’s algorithm does not take into account whether a recommended video is more or less extreme. That makes it clear the algorithm applies no ethical reasoning to the content itself. Most likely, YouTube simply wants to serve up a list of what you are most likely to watch, and extreme content is exactly what tends to grab your attention and get you to click.
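
            To make that point concrete, here is a minimal, purely illustrative sketch (not YouTube’s actual system; every name and number below is made up) of what ranking only by “what grabs attention” would look like: candidate videos are ordered by a predicted watch probability, and nothing in the score ever asks whether a video is extreme.

```python
# Illustrative sketch only -- NOT YouTube's real recommender.
# It shows the idea from the paragraph above: candidates are ranked purely by
# predicted engagement, and extremity never enters the scoring.

from dataclasses import dataclass


@dataclass
class Candidate:
    video_id: str
    predicted_watch_prob: float  # hypothetical model output: chance the user clicks and watches
    is_extreme: bool             # known here for illustration, but never used in ranking


def recommend(candidates: list[Candidate], k: int = 5) -> list[str]:
    """Return the top-k video ids ordered only by predicted engagement."""
    ranked = sorted(candidates, key=lambda c: c.predicted_watch_prob, reverse=True)
    return [c.video_id for c in ranked[:k]]


if __name__ == "__main__":
    pool = [
        Candidate("calm_news_recap", 0.31, is_extreme=False),
        Candidate("outrage_clip", 0.72, is_extreme=True),   # grabs attention, so it scores highest
        Candidate("howto_video", 0.44, is_extreme=False),
    ]
    print(recommend(pool, k=2))  # ['outrage_clip', 'howto_video']
```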

            There is clearly far too much content being uploaded to YouTube for every video to be reviewed right away. This has led to growing demands that YouTube tighten its content policies and moderation. For example, YouTube has taken down ISIS content but has not removed white-supremacist or violent right-wing extremist material. This is because YouTube would be shut down immediately if it hosted ISIS videos, but with violent right-wing extremism the line between what is hate speech and what is political speech is blurrier, so YouTube can get away with leaving most of that content up because it is harder to spot.

            The bottom line is that YouTube’s algorithms are motivated by monetization of its content and will not care whether that content is extreme, hurtful, or wrong. YouTube will not do much to reduce its role in spreading mass amounts of controversial and/or extreme content. YouTube collects data points on its users in order to predict what they will watch. For instance, if you watch a right- or left-wing political video, that is a valuable data point. If you select a politically neutral video, that is a less valuable data point. If you keep selecting videos from one side of the political spectrum, that data is more valuable because the algorithm can recommend more videos like them. Watching videos from both sides is less valuable data because it shows no clear preference, making it harder to predict what you would want to watch next. When Neal Mohan deflected questions about the rabbit-hole effect by saying the recommended list could lead to either more or less extreme videos, he was essentially steering attention away from the company’s ethical practices and abdicating responsibility. YouTube should be more open about its algorithm and take more responsibility for what it recommends, especially to young viewers.
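
            As a rough illustration of why one-sided viewing counts as “more valuable data,” here is a small hypothetical sketch (again, not YouTube’s real pipeline; the left/right/neutral labels and the counting scheme are assumptions for the example): it just tallies labeled watches and shows that a lopsided history gives a much more confident prediction of the next pick than a mixed one.

```python
# Hypothetical sketch of the "valuable data point" idea above.
# Each watched political video is labeled 'left', 'right', or 'neutral', and the
# system simply counts them. A one-sided history points strongly one way; a
# mixed history does not, so the next click is harder to predict.

from collections import Counter


def preference_profile(watch_history: list[str]) -> dict[str, float]:
    """Turn a history of 'left'/'right'/'neutral' labels into proportions."""
    counts = Counter(watch_history)
    total = sum(counts.values())
    return {label: counts[label] / total for label in ("left", "right", "neutral")}


def prediction_confidence(profile: dict[str, float]) -> float:
    """How strongly the profile leans one way (largest share). Higher = easier to predict."""
    return max(profile.values())


if __name__ == "__main__":
    one_sided = ["right"] * 8 + ["neutral"] * 2
    mixed = ["left"] * 4 + ["right"] * 4 + ["neutral"] * 2

    print(prediction_confidence(preference_profile(one_sided)))  # 0.8 -> easy to predict
    print(prediction_confidence(preference_profile(mixed)))      # 0.4 -> harder to predict
```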




Source:

https://www.nytimes.com/2019/03/29/technology/youtube-online-extremism.html#commentsContainer

1 comment:

Bobby Chambers said...

I thought the topic of your blog was very interesting because I often wonder how YouTube’s algorithm recommends videos. However, I had never previously thought about the ethical issues that arise from these recommendations. There is a very blurry line between what ought to be considered hateful rhetoric and what is protected free speech. I think YouTube ought to take more steps to limit the amount of hateful rhetoric on its site. As you stated in your article: “The bottom line is that YouTube’s algorithms are motivated by monetization of its content and will not care whether that content is extreme, hurtful, or wrong”. I believe an easy step for YouTube to take would be to remove monetization from videos that are flagged as controversial. YouTube could allow people to report a video as extreme, and then workers at YouTube could determine the next steps, whether that is taking the video down or not allowing it to be monetized. That way, people would not be able to profit off of extreme ideas. Another possible solution would be to keep such videos from appearing in recommended feeds, which would inhibit a channel’s ability to spread messages of hate to larger audiences. As an article from The Tartan describes: “YouTube’s algorithms act as a double-edged sword, only suggesting relevant videos to its users: those who don’t affiliate with the alt-right don’t see their videos, while those who do see proportionally more of them. While this shields the majority from controversial content, it also creates an echo-chamber for the minority, concentrating the evil that’s already there”. Because YouTube’s algorithm keeps people in a bubble of suggested videos, many people only ever see one perspective, which is especially problematic when the ideas involved are very extreme. Keeping these videos out of a person’s suggestions would therefore help limit the spread of hateful rhetoric.

Source: https://thetartan.org/2018/11/5/scitech/youtube-guilty