Why Social Media Amplifies Extreme Views – And How To Stop It


Peace-builder and Ashoka Fellow Helena Puig Larrauri co-founded Build Up to transform conflict in the digital age, in places from the U.S. to Iraq. With the exponential growth of viral polarizing content on social media, a key systemic question emerged for her: What if we made platforms pay for the harms they produce? What if we imagined a tax on polarization, akin to a carbon tax? A conversation about the root causes of online polarization, and why platforms should be held responsible for the negative externalities they cause.

Konstanze Frischen: Helena, does technology help or harm democracy?

Helena Puig Larrauri: It depends. There is great potential for digital technologies to include more people in peace processes and democratic processes. We work on conflict transformation in many regions across the globe, and technology can really help include more people. In Yemen, for instance, it can be very difficult to incorporate women’s viewpoints into the peace process. So we worked with the UN to use WhatsApp, a very simple technology, to reach out to women and have their voices heard, avoiding security and logistical challenges. That’s one example of the potential. On the flip side, digital technologies bring about immense challenges – from surveillance to manipulation. And here, our work is to understand how digital technologies are impacting conflict escalation, and what can be done to mitigate that.

Frischen: You have staff working in countries like Yemen, Kenya, Germany, and the U.S. How does it show up when digital media escalates conflict?

Puig Larrauri: Here is an example: We worked with partners in northeast Iraq, analyzing how conversations happen on Facebook, and the analysis quickly showed that what people said and how they positioned themselves had to do with how they spoke about their sectarian identity, whether they said they were Arab or Kurdish. What was happening at a deeper level is that users started to associate a person’s opinion with their identity, which means that in the end, what matters is not so much what is being said, but who is saying it: your own people, or other people. And it meant that the conversations on Facebook were extremely polarized, and not in a healthy way, but by identity. In a democratic process, in a peace process, we all must be able to disagree on issues. But when identities or groups start opposing each other, that’s what we call affective polarization. It means that no matter what you actually say, I’m going to disagree with you because of the group you belong to. Or, on the flip side, no matter what you say, I’m going to agree with you because of the group you belong to. When a debate is in that state, you’re in a situation where conflict is very likely to be destructive, and to escalate to violence.

Frischen: Are you saying social media makes your work harder because it drives affective polarization?

Puig Larrauri: Yes, it certainly feels like the odds are stacked against our work. Offline, there may be space, but online, it often feels like there’s no way we can start a peaceful conversation. I remember a conversation with Caleb, who leads our work in Africa. During the recent election cycle in Kenya, he said to me: “When I walk the streets, I feel like this is going to be a peaceful election. But when I read social media, it’s a war zone.” I remember this because even for us, professionals in this space, it is unsettling.

Frischen: The standard way for platforms to react to hate speech is content moderation: detecting it, labeling it, and, depending on the jurisdiction, perhaps removing it. You say that’s not enough. Why?

Puig Larrauri: Content moderation helps in very specific situations. It helps with hate speech, which is in many ways the tip of the iceberg. But affective polarization is often expressed in other ways, for example through fear. Fear speech is not the same as hate speech: it can’t be so easily identified, and it probably won’t violate a platform’s terms of service or content moderation guidelines. Yet we know that fear speech can be used to incite violence. That’s just one example; the point is that content moderation will only ever catch a small part of the content that is amplifying divisions. Maria Ressa, the Filipino journalist and Nobel Peace Prize laureate, put it well recently. She said something along these lines: content moderation is like fetching a cup of water from a polluted river, cleaning it, and then pouring it back into the river. So I say we need to build a water filtration plant.

Frischen: Let’s talk about that – the root cause. What does the underlying architecture of social media platforms have to do with the proliferation of polarization?

Puig Larrauri: There are actually two reasons why polarization thrives on social media. One is that it invites people to manipulate others and to deploy harassment en masse. Troll armies, Cambridge Analytica – we’ve all heard these stories, so let’s put that aside for a moment. The other aspect, which I think deserves a lot more attention, is the way social media algorithms are built: they look to serve you content that is engaging. And we know that affectively polarizing content, content that positions groups against each other, is very emotive and very engaging. As a result, the algorithms serve it up more. So social media platforms provide an incentive to produce polarizing content, because it will be more engaging, which prompts people to produce more content like that, which the algorithms amplify further, and so on. It’s a vicious circle.
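To make that feedback loop concrete, here is a minimal simulation sketch. Everything in it is invented for illustration (the posts, the engagement rates, the ranking rule); it is not any real platform’s code. It shows how a ranker that allocates impressions in proportion to observed engagement hands the more emotive item a growing share of the feed:

```python
import random

# Hypothetical sketch of the engagement feedback loop described above.
# Posts, probabilities, and numbers are invented; this is not real platform code.
posts = {
    "neutral":    {"impressions": 1, "engagements": 1},  # smoothing priors
    "polarizing": {"impressions": 1, "engagements": 1},
}
# Assumption for this sketch: emotive, group-vs-group content is engaged with more often.
TRUE_ENGAGE_RATE = {"neutral": 0.03, "polarizing": 0.06}

for _ in range(100_000):
    # The ranker allocates the next impression in proportion to the
    # engagement rate it has observed so far (an engagement-optimized feed).
    scores = {name: p["engagements"] / p["impressions"] for name, p in posts.items()}
    r = random.uniform(0, sum(scores.values()))
    shown = "neutral" if r < scores["neutral"] else "polarizing"

    posts[shown]["impressions"] += 1
    if random.random() < TRUE_ENGAGE_RATE[shown]:
        posts[shown]["engagements"] += 1  # more engagement -> ranked higher next time

total = sum(p["impressions"] for p in posts.values())
for name, p in posts.items():
    print(f"{name}: {p['impressions'] / total:.0%} of impressions")
# Typical outcome: the polarizing post captures roughly two thirds of the feed.
```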

Frischen: So the spread of divisive content is almost a side effect of this business model that makes money off engaging content.

Puig Larrauri: Yes, that’s the way social media platforms are designed at the moment: to engage people with content, any kind of content. It doesn’t matter what that content is – unless it’s hate speech or something else that violates a narrow policy, in which case it gets taken down. But in general, the goal is more engagement on anything. And that is built into their business model: more engagement allows them to sell more ads and to collect more data. They want people to spend more time on the platform. So engagement is the key metric. It’s not the only metric, but it’s the key metric the algorithms optimize for.

Frischen: What framework could force social media companies to change this model?

Puig Larrauri: Great question. But to understand what I’m about to propose, the main thing to grasp is that social media is changing the way we understand ourselves and other groups. It is creating divisions in society and amplifying existing political divisions. That’s the difference between focusing on hate speech and focusing on polarization. Hate speech and harassment are about the individual experience of being on social media, which is very important. But when we think about polarization, we’re talking about the impact social media has on society as a whole. Regardless of whether I’m personally harassed, I am still affected by the fact that I’m living in a more polarized society. It is a societal negative externality: something that affects all of us, whether or not we are individually targeted.

Frischen: Negative externality is an economics term that – I’m simplifying – describes a cost generated in a production or consumption process, a negative impact that is not captured by market mechanisms and that harms someone else.

Puig Larrauri: Yes, and the key here is that this cost is not included in the production costs. Take air pollution. Traditionally, in industrial capitalism, people produced things like cars and machines, and in the process they also produced environmental pollution. But at first, nobody had to pay for the pollution. It was as if that cost didn’t exist, even though it was a real cost to society; it just wasn’t being priced by the market. Something very similar is happening with social media platforms right now. Their profit model isn’t to create polarization; they just have an incentive to promote content that is engaging, regardless of whether it’s polarizing or not. But polarization happens as a by-product, and there’s no incentive to clean it up, just as there was no incentive to clean up pollution. That’s why polarization is a negative externality of this platform business model.

Frischen: And what are you proposing we do about that?

Puig Larrauri: Make social media companies pay for it, by bringing the societal pollution they cause into the market mechanism. That’s in effect what we did with environmental pollution: we said it should be taxed, through carbon taxes or some other mechanism like cap and trade, to make companies pay for the negative externality they create. And for that to happen, we had to measure things like CO2 output and carbon footprints. So my question is: Could we do something similar with polarization? Could we say that social media platforms, or perhaps any platform driven by an algorithm, should be taxed for their polarization footprint?
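The arithmetic of such a levy could mirror carbon pricing: measure a footprint, multiply by a rate. A toy sketch, with an invented footprint score and rate, just to show the mechanics; defining the footprint itself is the open problem discussed below:

```python
# Toy Pigouvian-style levy, by analogy with a carbon tax.
# Both the footprint score and the rate are invented placeholders.

def polarization_tax(footprint_score: float, rate_per_unit: float) -> float:
    """Tax owed = measured polarization footprint x rate per footprint unit,
    just as a carbon tax is tonnes of CO2 x price per tonne."""
    return footprint_score * rate_per_unit

# Made-up numbers: a platform measured at 1,250 footprint units,
# taxed at 40 currency units per unit, would owe 50,000.
print(polarization_tax(1_250.0, 40.0))
```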

Frischen: Taxation of polarization is such a creative, novel way to think about forcing platforms to change their business model. I want to acknowledge there are others out there – in the U.S., there’s a discussion about reforming Section 230, which currently shields social media platforms from liability for the content their users post, and….

Puig Larrauri: Yes, and there’s also a very big debate, which I’m very supportive of and part of, about how to design social media platforms differently: making algorithms optimize for something other than engagement, something that might be less polluting and produce less polarization. That’s an incredibly important debate. The question I have, however, is how we incentivize companies to actually take that on. How do we get them to say: yes, I’m going to make those changes, I’m not going to use this simple engagement metric anymore, I’m going to take on these design changes in the underlying architecture? I think the way to do that is to attach a financial cost to not doing it, which is why I’m so interested in the idea of a tax.

Frischen: How would you ensure that taxing content is not seen as undermining free speech protections? That’s a big argument, especially in the U.S., where disinformation and hate speech can be spread under that umbrella.

Puig Larrauri: I don’t think a polarization footprint necessarily needs to look at speech. It can look at metrics that have to do with the design of the platform: for example, the connection between belonging to a group and only seeing certain types of content. So it doesn’t need to get into issues of hate speech or free speech, and the censorship debate that comes with them. It can look simply at design choices around engagement. As I said before, I don’t think content moderation and censorship will work particularly well to address polarization on platforms. What we need to do now is set to work measuring this polarization footprint and finding the right metrics that can be applied across platforms.
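One way such a design metric could be operationalized – my own sketch, not a published Build Up methodology – is to measure how strongly feed exposure depends on group membership, for instance as the mutual information between a user’s group and the content clusters their feed shows them. Zero would mean feeds are independent of group identity; higher values mean feeds segregate by group. The groups, content clusters, and counts below are invented for illustration:

```python
from collections import Counter
from math import log2

# Hypothetical (user_group, content_cluster_shown) pairs, e.g. sampled from feed logs.
exposures = [
    ("group_a", "in_group_news"), ("group_a", "in_group_news"),
    ("group_a", "shared_topics"),
    ("group_b", "out_group_fear"), ("group_b", "out_group_fear"),
    ("group_b", "shared_topics"),
]

def mutual_information(pairs):
    """Mutual information (in bits) between group membership and content exposure."""
    n = len(pairs)
    joint = Counter(pairs)
    groups = Counter(g for g, _ in pairs)
    clusters = Counter(c for _, c in pairs)
    mi = 0.0
    for (g, c), count in joint.items():
        p_joint = count / n
        p_indep = (groups[g] / n) * (clusters[c] / n)
        mi += p_joint * log2(p_joint / p_indep)
    return mi  # 0 bits = feed independent of group; higher = more segregated

print(f"{mutual_information(exposures):.2f} bits")  # ~0.67 bits for this toy data
```

A metric like this looks only at who is shown what, never at whether any individual post is acceptable speech, which is the point Puig Larrauri makes above.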

For more, follow Helena Puig Larrauri and Build Up.





Source link: https://www.forbes.com/sites/ashoka/2023/05/04/why-social-media-amplifies-extreme-views–and-how-to-stop-it/
