Two years ago, Jack Dorsey, the CEO of Twitter, announced he would take steps to curb the spread of hate speech, harassment, fake news and conspiracy theories. The network has already rolled out some measures, such as the option to hide replies to tweets, and this March it tightened its rules against hateful conduct. Language that dehumanizes people on the basis of age, religion, disability or illness is now banned. According to Twitter, dehumanizing language increases the risk of harm offline, a link that some research supports.

Why is conversation on Twitter so toxic?

The environment on Twitter, and the roles users adopt there, differ markedly from those on competitors such as Instagram. Generally speaking, audiovisual platforms are friendlier than textual ones, and although Twitter has incorporated audiovisual content at a fast pace over the last few years, it was born as an eminently textual social network, and that origin still shapes its character.

The character limit on posts: although it has been one of the platform's differentiating features since its creation, it pushes messages toward oversimplification and strips out nuance, which in turn fuels polarization and highly radicalized confrontation.

Ubiquity leads to immediacy, and often to irrationality: Twitter was the first mainstream social network conceived primarily for mobile devices, and ubiquity brings immediacy, which often produces thoughtless, visceral and unempathetic replies. Instant responses favour the most emotional or irrational reactions, and the fact that hateful messages can be endorsed by other users amplifies this kind of unwanted behaviour.

The call effect: if a platform is full of trolls and haters, those who enjoy conflict and harassment feel at home, while users repelled by hate speech and malice eventually leave the network out of sheer exhaustion, so the proportion of toxic users relative to the total rises substantially.

A spiral towards transgression: as social networks become saturated, users must be ever more transgressive to attract the same attention, creating a kind of spiral effect. As a result, the inherent features of Twitter that already favoured toxic discourse have been accentuated very quickly.

The anonymity of Twitter: this network has traditionally been more permissive than others about moderating content and verifying users' identities, which has emboldened users who hide behind a misplaced notion of tolerance, and behind anonymity, to behave badly on the platform. According to a report by The Social Media Family, the number of verified profiles fell by 6.6% in 2019 compared with 2018.

Hiding replies does not lower the level of toxicity

Given this scenario, the company has tried to implement control measures to reduce toxicity, such as the option to hide replies to one's own tweets. This measure can reduce the noise in a conversation and make it harder for certain behaviours to be endorsed, although its scope is limited because hidden replies remain accessible. According to the platform itself, in Canada, where the feature was first tested, 27% of people said they would “reconsider how they would interact with others in the future” after learning that their replies had been hidden by the original tweeter; the remaining 73% did not care.

The measure is double-edged and short-sighted; it will not significantly reduce the toxicity of certain content. It is simply an intermediate option between blocking a user or reporting a piece of content, at one extreme, and accepting any comment however much we dislike it, at the other. Moreover, the measure exposes the tweet's author to a degree of vulnerability: by hiding replies they reveal what they dislike, and anyone can then browse those hidden messages to extract patterns of behaviour and draw conclusions.

The platform has also floated other ideas that may appear in the future: an option to disable retweets, to remove a tag, or to prevent people from being mentioned without permission.
