Are a few people ruining the internet for the rest of us?

Original link: https://www.theguardian.com/books/2025/jul/13/are-a-few-people-ruining-the-internet-for-the-rest-of-us

Social media presents a distorted version of reality, amplified by a small group of highly active users who dominate online discourse. Research shows that a tiny fraction of users produces the vast majority of extreme content, from political tweets to misinformation, creating a false impression of widespread polarization and outrage. Algorithms designed to capture attention by prioritizing divisive content make the problem worse, incentivizing even moderate users to adopt more extreme positions. This skewed picture of public opinion fuels "pluralistic ignorance", in which individuals misperceive what others believe and change their own behaviour accordingly. The author advocates recognizing the illusion, curating one's social media feeds, and resisting outrage bait in order to take back control. He also suggests that platforms could redesign their algorithms to prioritize more representative or nuanced content.

This Hacker News thread discusses whether a few bad actors are ruining the internet, citing the Guardian article on the subject. Commenters largely agree that toxicity exists but debate its causes and extent. Some point to a "small minority" with malicious intent, amplified by algorithms, as the problem, while others argue that online anonymity breeds a "politically incorrect" underbelly. The decline of Usenet is cited as an example of a once-valuable online space overrun by spam and trolls. Various solutions are proposed, including better moderation tools, personal responsibility, and even a shift away from ad-driven business models. Some users, however, see the problem as systemic, rooted in flaws inherent to human nature and in societal polarization. Many feel the Guardian article glosses over this complexity and downplays the internet's positive aspects. A few even regard the focus on algorithms and bad actors as a way of avoiding discussion of deeper divisions within the population.

Original Article

When I scroll through social media, I often leave demoralized, with the sense that the entire world is on fire and people are inflamed with hatred towards one another. Yet, when I step outside into the streets of New York City to grab a coffee or meet a friend for lunch, it feels downright tranquil. The contrast between the online world and my daily reality has only gotten more jarring.

Since my own work is focused on topics such as intergroup conflict, misinformation, technology and climate change, I’m aware of the many challenges facing humanity. Yet it’s striking that people online seem just as furious about the finale of The White Lotus or the latest scandal involving a YouTuber. Everything is either the best thing ever or the absolute worst, no matter how trivial. Is that really what most of us are feeling? No, as it turns out. Our latest research suggests that what we’re seeing online is a warped image created by a very small group of highly active users.

In a paper I recently published with Claire Robertson and Kareena del Rosario, we found extensive evidence that social media is less like a neutral reflection of society and more like a funhouse mirror. It amplifies the loudest and most extreme voices while muting the moderate, the nuanced and the boringly reasonable. And much of that distortion, it turns out, can be traced back to a handful of hyperactive online voices. Just 10% of users produce roughly 97% of political tweets.

Let’s take Elon Musk’s own platform, X, as an example. Despite being home to hundreds of millions of users, a tiny fraction of them generate the vast majority of political content. For instance, Musk posted 1,494 times in his first 15 days of implementing government cuts for the so-called department of government efficiency (Doge) earlier this year. He was, essentially, writing non-stop. And many of his posts spread misinformation to his 221 million followers.

On 2 February he wrote, “Did you know that USAID, using YOUR tax dollars, funded bioweapon research, including Covid-19, that killed millions of people?” His behaviour fits the pattern of many misinformation super-spreaders. A mere 0.1% of users share 80% of fake news. Twelve accounts – known as the “disinformation dozen” – created most of the vaccine misinformation on Facebook during the pandemic. These few hyperactive users produced enough content to create the false perception that many people were vaccine hesitant.

Similar patterns can be observed across the internet. Only a small percentage of users engage in truly toxic behaviour, but they’re responsible for a disproportionate share of hostile or misleading content on nearly every platform, from Facebook to Reddit. Most people aren’t posting, arguing, or fuelling the outrage machine. But because the super-users are so active and visible, they dominate our collective impression of the internet.

The resulting problems don’t stay confined to this small cohort: their output distorts how the rest of us make sense of the world. Humans create mental models about what other people think or do. It’s how we figure out social norms and navigate groups. But on social media, this shortcut backfires. We don’t get a representative sample of opinions. Instead, we see a flood of extreme, emotionally charged content.

In this way, many of us are led to believe that society is far more polarized, angry, and deluded than it really is. We think everyone on the other side of the generation gap, political spectrum, or fandom community is radical, malicious, or just plain dumb. Our information diet is shaped by a sliver of humanity whose job, identity, or obsession is to post constantly.

This distortion fuels pluralistic ignorance – when we misperceive what others believe or do – and can shift our own behaviour accordingly. Think of voters who see only the angriest hot takes about immigration or climate change and assume there’s no common ground to be found.

The problem isn’t just the individual extremists, of course – it’s the platform design and algorithms that amplify their content. These algorithms are built to maximise engagement, which means they privilege content that is surprising or divisive. The system is optimised to promote the very users who are most likely to distort our shared perception of reality.

It gets worse. Imagine you’re sitting in a busy restaurant, having to speak a little louder just to be heard. Before long, everyone is shouting. These same dynamics happen online. People exaggerate their beliefs or repeat outrageous narratives to get attention and approval. In other words, even people who aren’t especially extreme may start acting that way online, because it gets rewarded.

Most of us aren’t spending time on our phones trolling our foes. We’re busy working, raising families, spending time with friends, or simply trying to find some harmless entertainment on the internet. Yet, our voices are drowned out. We have effectively handed over a megaphone to the most obnoxious people and let them tell us what to believe and how to act.

With over 5 billion people now on social media, this technology isn’t going away. But the toxic dynamic I’ve described doesn’t have to hold sway. The first step is to see through the illusion and understand that a silent majority often lurks behind each incendiary thread. And we, as users, can take back some control – by curating our feeds, resisting the outrage bait, and refusing to amplify the nonsense. Think of it like deciding to follow a healthier, less processed diet.

In a recent series of experiments, we paid people a few dollars to unfollow the most divisive political accounts on X. After a month, they reported feeling 23% less animosity towards other political groups. In fact, their experience was so positive that nearly half the people declined to refollow those hostile accounts after the study was over. And those who maintained their healthier newsfeeds reported less animosity a full 11 months after the study.

Platforms could easily redesign their algorithms to stop promoting the most outrageous voices and prioritise more representative or nuanced content. Indeed, this is what most people want. The internet is a powerful and often valuable tool. But if we keep letting it reflect only the funhouse mirror world created by the most extreme users, we’ll all suffer the consequences.

Jay Van Bavel is a professor of psychology at New York University.

