How censorship works in YouTube comments

YouTube, that huge video-sharing platform, sits somewhere between a cultural enclave and a marketing tool. It is so big that it gathers nearly half of all internet users or, in other words, almost 30% of the world's population.

Although in the beginning its policy on what could be published was fairly lax, the truth is that, for some time now, YouTube has been tightening its rules to keep certain sensitive topics at bay. In fact, just a few weeks ago it updated its monetization policy on 'sensitive events' once again, adding any reference to the current health crisis to its blacklist.

But how does YouTube decide that a comment is inappropriate? Who, or what, defines which words and phrases are potentially 'dangerous'? Step number one is to look at the mechanisms the platform uses, in order to understand how censorship works in YouTube comments.


How does censorship work on YouTube?

With more than 2 billion active users on YouTube, some way to automate oversight of all that activity is essential. That mechanism takes the form of a highly sophisticated automated system, driven by AI, that is responsible for tracking, profiling and classifying the content and activity of everyone on the site.

This impressive system uses machine learning to detect not only words, but also expressions that could be inappropriate.

We are not just talking about an algorithm in the strict sense, one that simply blocks a comment based on a list of 'forbidden' words. No, this goes much further. We are talking about a machine learning model trained on millions of samples of comments previously moderated by channel owners or other users.
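To make that idea concrete, here is a minimal sketch of the general technique, assuming nothing about YouTube's actual pipeline: a handful of previously moderated comments serve as labeled training samples for a standard text classifier (TF-IDF features plus logistic regression). All the data and names here are invented for illustration.

```python
# Minimal sketch: training a toy comment classifier on previously
# moderated samples. This is NOT YouTube's actual system, just an
# illustration of the general technique (supervised text classification).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: comments a moderator already acted on.
# Label 1 = removed by a moderator, 0 = left published.
comments = [
    "Great video, thanks for sharing!",
    "You are an idiot and should disappear",
    "Loved the editing on this one",
    "Click here to win a free phone!!!",
]
labels = [0, 1, 0, 1]

# TF-IDF turns each comment into a weighted word-frequency vector;
# logistic regression then learns which patterns predict removal.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# Score a new comment: the probability it resembles removed samples.
new_comment = "win a free phone now, click here"
prob_removed = model.predict_proba([new_comment])[0][1]
print(f"Probability of removal: {prob_removed:.2f}")
```

The point of the sketch is that nobody hand-writes a rule for every offensive phrase: the model generalizes from examples that humans have already moderated.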

Don't get us wrong: of course there is a list of forbidden words policed by an algorithm, but no less important are all the tools and resources YouTube makes available to the community for reporting inappropriate behavior. All this input from users is what feeds the platform's data analysis system, letting it police activity within the community far more efficiently and precisely.
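The forbidden-word list is the simpler of the two mechanisms, and a hedged sketch of it might look like the following. The word list is invented; YouTube's real list is not public.

```python
# Minimal sketch of a plain blocklist check. The entries below are
# placeholders for illustration only.
import re

FORBIDDEN_WORDS = {"spamlink", "badword1", "badword2"}  # hypothetical entries

def hits_blocklist(comment: str) -> bool:
    """Return True if the comment contains any forbidden word.

    Word-boundary matching avoids false positives on substrings,
    e.g. searching for 'crap' should not trip on 'scrap'.
    """
    for word in FORBIDDEN_WORDS:
        if re.search(rf"\b{re.escape(word)}\b", comment, re.IGNORECASE):
            return True
    return False

print(hits_blocklist("check out this spamlink now"))  # True
print(hits_blocklist("a totally normal comment"))     # False
```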

In other words, this big brother is a combination of an algorithm that automates keyword search, machine learning, and responsible action by the YouTube community. The kinds of terms and expressions that can be subject to censorship are laid out in the platform's 'Policies and Safety' section.
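Putting the three signals together, a moderation decision might look something like the sketch below. The thresholds, field names and outcomes are all assumptions made for illustration; none of them come from YouTube.

```python
# Hypothetical sketch of how the three signals described above
# (keyword blocklist, ML score, community reports) could be combined.
# Thresholds and structure are invented for illustration only.
from dataclasses import dataclass

@dataclass
class CommentSignals:
    hits_blocklist: bool      # did an automated keyword scan match?
    ml_removal_score: float   # classifier output in [0, 1]
    user_reports: int         # how many community members flagged it

def moderation_decision(s: CommentSignals) -> str:
    # A blocklist hit is treated as a hard rule: block outright.
    if s.hits_blocklist:
        return "block"
    # A confident ML score or a pile of reports sends the comment
    # to review rather than deleting it silently.
    if s.ml_removal_score > 0.9 or s.user_reports >= 5:
        return "hold_for_review"
    # Weaker signals from both sources can still add up.
    if s.ml_removal_score > 0.6 and s.user_reports > 0:
        return "hold_for_review"
    return "publish"

print(moderation_decision(CommentSignals(False, 0.95, 0)))  # hold_for_review
print(moderation_decision(CommentSignals(True, 0.10, 0)))   # block
print(moderation_decision(CommentSignals(False, 0.20, 1)))  # publish
```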

This whole set of rules is meant to keep out content that is harmful, dangerous, pornographic, hateful, copyright-infringing, and so on.


Censorship in numbers

To give you an idea of how effective the tools described above are, just take a look at the number of comments deleted in the last quarter of 2019 alone (the most recent on record): a total of 541 million deleted comments.

You can also compare the percentage of comments deleted by YouTube's own system with those deleted or blocked directly by the youtubers themselves (as opposed to ordinary users). Another interesting figure is the percentage of comments deleted by topic.