In a study published in Nature, researchers found evidence of a consistent pattern of harm in human conversations across different social media platforms, unaffected by platform type, topic of discussion, or time period. The study found that longer online conversations tend to become more toxic and polarizing, especially when they involve conflicting opinions. Surprisingly, such harmful interactions do not deter users.
While previous research has focused on online polarization, misinformation, and antisocial behavior, there is still no comprehensive understanding of how human behavioral patterns manifest across these platforms. A new study aimed to fill that gap by investigating the dynamics of toxicity across different digital environments.
“Social media platforms have become central to communication, information gathering and opinion formation,” said study author Walter Quattrociocchi, a full professor of computer science at Sapienza University of Rome.
“However, the prevalence of toxicity can impair these processes, negatively impacting users' mental health and the quality of public discussion. We wanted to uncover the underlying mechanisms and potential solutions to mitigate this behavior, ultimately contributing to a healthier digital environment.”
The research team collected data from eight different social media platforms, amassing a total of nearly 500 million comments over a 34-year period. This large dataset included platforms widely used in the public sphere, such as Facebook, Twitter, and Reddit, as well as less mainstream platforms such as Gab and Voat. The dataset also includes comments from USENET, a worldwide decentralized discussion system established in 1980, more than a decade before the World Wide Web became available to the general public.
The comments collected were related to a variety of topics, including politics, news, the environment, and vaccinations. This diversity of topics helped to minimize thematic bias that could influence the nature of online conversations and allowed for a more generalized understanding of toxicity across different discussion contexts.
“Analyzing multiple platforms is key to separating true human behavior patterns from simple reactions to idiosyncratic online environments,” said co-author Andrea Baronchelli, professor of complexity science at City, University of London. “There is so much focus on certain platforms that we forget our humanity. Our research aims to change this attitude and put the spotlight back on who we are and how we behave. This is an important step.”
To analyze the toxicity of the collected comments, the researchers leveraged the Perspective API, a machine learning tool developed by Google to detect harmful language, defined in the study as “a rude, disrespectful, or unreasonable comment that is likely to make someone leave a discussion.” This definition allowed the researchers to quantify and compare levels of toxicity across different platforms and time frames.
The Perspective API assigns a toxicity score to each comment, which the researchers used to determine the prevalence and distribution of harmful comments within the dataset. By employing such automated tools, they were able to process vast amounts of data efficiently.
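As a rough illustration of this scoring step, the sketch below builds a request for the Perspective API's `comments:analyze` method and extracts the summary toxicity score from a response. This is not the authors' code; the endpoint URL and request/response shapes follow Google's public API, while the `doNotStore` flag and the helper names are illustrative choices.

```python
# Minimal sketch of scoring one comment with Google's Perspective API.
# Not the study's pipeline; helper names are illustrative.
PERSPECTIVE_URL = (
    "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
)

def build_request(text: str) -> dict:
    """Build the JSON body for a TOXICITY-only analyze request."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
        "doNotStore": True,  # avoid retaining the comment server-side
    }

def extract_score(response: dict) -> float:
    """Pull the summary toxicity score (0.0 to 1.0) from a response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# With an API key, the actual call would look something like:
#   requests.post(PERSPECTIVE_URL, params={"key": API_KEY},
#                 json=build_request(comment_text))
```

Each comment's score falls between 0 and 1, so a researcher can threshold it (for example, at 0.6) to flag a comment as toxic, or aggregate the raw scores directly.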
One key finding was that the longer an online conversation lasted, the more likely it was to become harmful. This pattern holds true regardless of the social media platform, the topic of discussion, or the historical context in which the conversation takes place. This suggests that as discussions drag on, they tend to develop into more polarized and adversarial interactions.
“Harmful behavior is pervasive on all types of social media platforms and discussions, even in non-polarized settings,” Quattrociocchi told PsyPost. “This suggests that while platform design and conversation topics are essential, there are inherent aspects of online interactions that foster toxicity.”
Contrary to the common assumption that harmful interactions discourage participation, the researchers found that toxicity did not drive users away. In fact, users were more likely to remain actively engaged in discussions where harmful comments were prevalent. This finding indicates that a toxic environment may not only fail to repel users, but may also foster the kind of engagement that keeps them returning to the conversation, perhaps through emotional investment or a sense of conflict.
“One of the most surprising findings was that despite harmful comments, conversations often continued rather than abruptly ending,” Quattrociocchi said. “This calls into question the traditional notion that toxicity simply disrupts interaction. This resilience opens new avenues for understanding how people adapt to and manage social media environments.”
The researchers observed that these patterns of toxicity and engagement were consistent across different social media platforms. This consistency suggests that the dynamics of online toxicity are a fundamental aspect of human interaction in digital spaces, rather than a product of the specific design, culture, or moderation policies of individual platforms.
Despite the extensive data and robust analysis, this study has limitations. One of the main challenges is distinguishing between behavioral patterns intrinsic to humans and those shaped by platform design and algorithmic structure. While the use of automated systems to detect toxicity is necessary to handle large datasets, it also introduces potential biases due to the complexity of natural language and the subtleties of human communication.
Future research should focus on improving toxicity detection techniques, understanding the triggers of toxic behavior, and exploring the role of platform algorithms in shaping these dynamics. Furthermore, investigating the effects of these interaction patterns in offline environments may provide deeper insight into the nature of the prevalence of toxicity in human interactions.
“An important caveat to our study is the limitation of directly comparing online behavior to offline behavior,” Quattrociocchi explained. “While the digital nature of the data makes our understanding of online interactions nuanced and data-driven, obtaining comparably comprehensive offline data is far more complex. This limits our ability to fully explore how these behaviors differ in non-digital settings.”
“Our main long-term goal is to advance our understanding of human behavior on social media platforms, moving us beyond mere speculation to a solid, empirical understanding. We aim to systematically analyze how people behave online and why they behave the way they do, identifying triggers and contexts for harmful behaviors and positive interactions.
“Understanding the persistent nature of toxicity on social media allows users to engage more mindfully,” Quattrociocchi added. “We hope that our findings will inspire other researchers to explore innovative solutions and that platform developers will consider these insights when designing future iterations of the interface.”
The study, “Persistent interaction patterns across social media platforms and over time,” was authored by Michele Avalle, Niccolò Di Marco, Gabriele Etta, Emanuele Sangiorgio, Shayan Alipour, Anita Bonetti, Lorenzo Alvisi, Antonio Scala, Andrea Baronchelli, Matteo Cinelli, and Walter Quattrociocchi.