
Weekly Shaarli

Week 26 (June 24, 2024)

Testimonies. Inside the hell of the social media “cleaners”

As the tech giants try to tighten control over their platforms, “content moderators” are exposed to countless violent or hateful posts as part of their work. The Japanese daily Asahi Shimbun went to meet them.

Published June 27, 2024 at 5:00 a.m. Shiori Tabuchi, Azusa Ushio

These videos proliferate across the web. Violence, threats, sexual acts… Yet the moderators have only two or three minutes to decide whether or not to delete them.

We are in an office building in a city in Southeast Asia. In one room, content moderators, nicknamed the “cleaners” of social media, sit silently at their computers, headsets on, deleting internet posts deemed inappropriate.

Among them, a Japanese man employed by a subcontractor of a tech giant that runs a video-sharing site agreed to answer our questions, on condition that neither his name nor his age be disclosed:

“I am forbidden to talk in detail about the content of my work.”

He works rotating eight-hour shifts in teams organized by language, for a monthly salary of around 200,000 yen [1,200 euros]. Bound by strict confidentiality, he may not bring his smartphone into the room, or even a simple pen.

When he arrives at his workstation, he switches on his two screens. On one, a video plays at high speed. The other displays the many moderation rules to apply, a document that seems to run to a thousand pages. When he spots prohibited content, he files the video under a category such as “violence”, “porn”, “harassment” or “hate”, then finds the rule it violates and copies it into the comments field. “The essential thing is to find it as quickly as possible,” he explains.
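
His description amounts to a small, fixed record filed per video: a category plus the verbatim rule text pasted into a comments field. A minimal sketch of such a record in Python, with hypothetical rule texts and field names (the article confirms only the four categories and the copy-the-rule step):

```python
from dataclasses import dataclass

# Hypothetical rulebook; the real document reportedly runs to
# roughly a thousand pages.
RULEBOOK = {
    "violence": "Graphic depictions of physical assault are prohibited.",
    "porn": "Sexually explicit material is prohibited.",
    "harassment": "Targeted intimidation of an individual is prohibited.",
    "hate": "Attacks on people based on protected attributes are prohibited.",
}

@dataclass
class ModerationDecision:
    video_id: str
    category: str   # e.g. "violence", "porn", "harassment", "hate"
    rule_text: str  # the violated rule, copied into the comments field
    action: str     # "remove", "escalate" (send to a specialized team), "keep"

def file_decision(video_id: str, category: str) -> ModerationDecision:
    """Classify a video, look up the rule it violates, and copy that
    rule into the comments field -- the step he says must be fast."""
    return ModerationDecision(video_id, category, RULEBOOK[category], "remove")
```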

When he finishes checking one video, the next appears. Besides content flagged by users, “there are probably posts detected automatically by artificial intelligence (AI), but I don’t know how they are selected.”

A game of cat and mouse

If a video shows someone being beaten bloody or contains threats like “I’m going to kill him,” he deletes it immediately. When in doubt, he sends the video on to a specialized department. Of the roughly 80 videos he watches each day, he deletes about three. There are also a dozen or so he finds hard to judge. He does not know how many departments there are in all, or who ultimately makes the decisions. “I proceed mechanically,” he says.

He remembers a spike in activity after former Prime Minister Shinzo Abe was shot dead [in July 2022]. Footage of the scene was posted again and again. “I was deleting the unblurred videos practically one after another.”

The moderation rules are numerous and detailed, and changes are announced each week at morning meetings. A database of taboo words is also provided. At the end of each workday, the moderators take a test of their knowledge of the latest rules: those who score poorly have their pay docked.

Deleted videos are frequently reposted, and some content slips through the net. Our moderator is aware of the criticism:

“We do our best, but it’s a game of cat and mouse. We cannot delete every video. Those that are not flagged stay up.”

The tech giant behind this moderation operation once maintained that it merely provided a “place” for expression and was not responsible for what was posted there. But the proliferation of harmful posts forced it to react and step up its policing.

The Digital Services Act (DSA), adopted by the European Union (EU), now obliges large internet platforms to remove harmful posts, including discriminatory content and disinformation. Many are removed automatically by AI, but some require human intervention. According to reports the European Commission asked the tech giants to submit last October, Facebook removed nearly 47 million pieces of rule-breaking content in Europe over the five months from late April 2023. Of those, 2.83 million, or 6%, were removed by moderators.

“Soldiers of the networks”

Facebook employs about 15,000 moderators and X about 2,300. TikTok has around 40,000, tasked in particular with checking popular videos that pass a certain view count and removing those that are problematic.

“Moderators are the soldiers working in the shadows of social media,” says Kauna Malgwi, 30, who now lives in Abuja, the capital of Nigeria. Five years ago, a single mother in a precarious situation, she went to study in Kenya. There she accepted what was advertised as an “interpreter position in a ‘customer service’ department using Hausa,” one of the most widely spoken languages in West Africa. In reality, she found herself moderating content for Meta, which operates Facebook and Instagram. Alongside her graduate studies, for about four years until March 2023, she worked nine hours a day, five days a week, for the Kenyan branch of a subcontractor of the American tech giant.

A traumatizing experience

The first video she viewed showed a man falling from the 15th floor of a building. At the appalling sight of the body hitting the ground, she jumped out of her chair. She then had to fill in a pyramid-shaped questionnaire listing the grounds for deletion from top to bottom. After answering no to the first question, “Are there naked bodies?”, she ticked the boxes “Are there visible organs?” and “Is there blood?”.
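
The form she describes works like an ordered checklist walked from top to bottom. A minimal sketch, in which any wording beyond the three quoted questions would be hypothetical:

```python
# The three questions quoted in the article; a real form would continue
# with many more grounds for deletion, checked in fixed order.
QUESTIONS = [
    "Are there naked bodies?",
    "Are there visible organs?",
    "Is there blood?",
]

def grounds_for_deletion(answers: dict[str, bool]) -> list[str]:
    """Walk the form top to bottom and collect every ground that applies."""
    return [q for q in QUESTIONS if answers.get(q, False)]

# The video she recalls: no nudity, but organs and blood were visible.
print(grounds_for_deletion({
    "Are there naked bodies?": False,
    "Are there visible organs?": True,
    "Is there blood?": True,
}))  # -> ['Are there visible organs?', 'Is there blood?']
```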

Sexual assaults on young children, executions by extremist groups, suicides by gunshot… Every day she reviewed a thousand videos, detected by AI or flagged by users, with at most fifty-five seconds per video to decide whether or not to delete it.

She also deleted racist texts and other hate messages containing specific words.

“It wasn’t just text. For example, a drawing showing an Asian man and a monkey side by side with the caption ‘two brothers’ had to be deleted.”

She even deleted content posted from Southeast Asia, several thousand kilometers away.

She earned 60,000 Kenyan shillings (about 400 euros) a month, roughly the average monthly income in Kenya. But she suffered from both insomnia and panic disorder, which put her in the hospital several times.

Non-disclosure agreements meant she could not confide even in her family. Her colleagues, the only people with whom she could share her feelings, smoked cannabis during breaks to escape reality. Some even admitted to thinking about suicide. “Protecting the many users of social media, which has become an institution, is certainly important work, but still…” Even today, she sometimes cries when she remembers the images she saw.

EU Council has withdrawn the vote on Chat Control

By Alex Ivanovs Published 20/06/2024

The EU Council has withdrawn the vote on the contentious Chat Control plan put forward by Belgium, which currently holds the Council’s rotating presidency.

According to Netzpolitik (German), “The EU Council did not make a decision on chat control today, as the agenda item was removed due to the lack of a majority, confirmed by Council and member state spokespersons”.

Belgium’s draft law, which was supposed to be adopted as the Council’s negotiating position, was instead postponed indefinitely. Although the Committee of Permanent Representatives meets weekly, Belgium cannot currently present a proposal that would gain a majority. In July, the Council Presidency will transfer from Belgium to Hungary, which has stated its intention to advance negotiations on chat control as part of its work program.

In May 2022, the European Commission proposed monitoring all chat messages and other forms of digital communication among citizens. This initiative includes client-side scanning for end-to-end encrypted services, meaning all messages would be checked irrespective of suspicion.
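
Conceptually, client-side scanning moves the check onto the sender’s device, ahead of end-to-end encryption, which is why every message can be inspected irrespective of suspicion. A minimal sketch of the idea, with hypothetical function names (real proposals rely on perceptual hashes and classifiers rather than exact digests):

```python
import hashlib
from typing import Callable

# Hypothetical set of digests of known abusive material, shipped to the device.
KNOWN_ABUSE_DIGESTS: set[str] = set()

def send_message(
    plaintext: bytes,
    encrypt: Callable[[bytes], bytes],
    transmit: Callable[[bytes], None],
    report: Callable[[str], None],
) -> None:
    """Client-side scanning: content is inspected while still readable,
    *before* end-to-end encryption is applied to it."""
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in KNOWN_ABUSE_DIGESTS:
        report(digest)  # flagged on-device, regardless of any suspicion
    transmit(encrypt(plaintext))
```

The criticism is visible in the structure itself: the scan-and-report hook runs while the content is still readable, so encryption no longer protects the whole path.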

The plan targets the detection of both known and unknown abusive material and grooming activities. Experts have cautioned that such measures are prone to generating numerous false positives, particularly when identifying unknown content, leading to innocent citizens being misidentified as senders of abusive material.
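
That false-positive concern is a base-rate effect, which a back-of-the-envelope calculation makes concrete (the numbers below are illustrative, not from the proposal):

```python
# Illustrative numbers only: a detector that catches 95% of abusive content
# with a 1% false-positive rate, scanning traffic in which 1 in 10,000
# messages is actually abusive.
base_rate = 1 / 10_000
tpr = 0.95   # true-positive rate
fpr = 0.01   # false-positive rate

p_flagged = base_rate * tpr + (1 - base_rate) * fpr
p_abusive_given_flag = base_rate * tpr / p_flagged
print(f"{p_abusive_given_flag:.1%} of flags are correct")  # ~0.9%
```

Under these assumptions more than 99% of flags would land on innocent users, which is exactly the misidentification risk the experts describe.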

European legislation is formed through a trialogue process involving negotiations between the European Commission, the European Parliament, and the Council of Ministers. Initially, the European Parliament rejected the European Commission’s proposal and introduced its own, which, while still critical, excluded end-to-end encrypted services. However, Belgium’s new proposal reintroduced client-side scanning for these services, stipulating that users must consent to chat controls; otherwise, they would lose the ability to send photos, videos, and URLs.

This method, termed “upload moderation” by Belgium, has been criticized by opponents as merely a rebranding of the original concept.

Signal and other apps threaten to leave the EU if the proposal is enacted as law

Meredith Whittaker, president of the chat app Signal, has been vocal against these plans. She argues that implementing such measures within end-to-end encrypted communications fundamentally undermines encryption and introduces significant vulnerabilities in the digital infrastructure.

Whittaker emphasizes that these vulnerabilities have far-reaching global implications, not just within Europe. She has repeatedly highlighted the issue, stating, “There is no way to implement such proposals without fundamentally undermining encryption and introducing dangerous vulnerabilities.”

On June 17, Whittaker published an official position condemning the EU’s proposed “upload moderation” as a rebranding of client-side scanning that fundamentally undermines end-to-end encryption.

She emphasized that despite attempts to mask the dangers through marketing, these measures expose encrypted communications to mass surveillance, creating vulnerabilities exploitable by hackers and hostile nations. Whittaker urged a cessation of such rhetorical games, reiterating that any form of mandated mass scanning compromises encryption, thereby threatening global security and privacy at a critically unstable geopolitical moment.

The privacy messenger Threema published a blog post saying the EU’s proposed Chat Control bill represents a dangerous mass surveillance initiative that would undermine data security, violate privacy rights, and negatively impact professionals and minors.

Patrick Breyer, the outgoing MEP from the Pirate Party, raised concerns, noting that proponents of chat control have leveraged the period following the European elections, when attention is lower and the European Parliament is in transition, to advance their agenda. Breyer has called on European citizens to take action and urge their politicians to oppose the measures.

Edward Snowden, the NSA whistleblower, criticized the proposal, stating, “EU apparatchiks are trying to legislate a terrible mass surveillance measure, despite universal public opposition (no sane person wants this), by inventing a new word for it – upload moderation – and hoping no one finds out what it is until it’s too late.”

What happens next?

With the EU Council withdrawing the vote on the Chat Control proposal today, the legislative process faces new uncertainty. The proposal will return to the drawing board, as the European Commission[1] and the European Parliament continue to deliberate on the best way forward.

The discussions will resume after the summer, once the new Parliament is seated and Hungary assumes the Council presidency from Belgium in July. Hungary has already committed to developing a comprehensive legislative framework to prevent and combat online child sexual abuse and revising the directive against the sexual exploitation of children.

The forthcoming negotiations are anticipated to be highly contentious, especially since the European Parliament has firmly opposed any measures that would circumvent end-to-end encryption. The Member States and the Parliament have until April 2026 to agree. This deadline is crucial, as an existing exemption allowing social networks to self-moderate content will expire, potentially eliminating current safeguards against sharing sensitive images.

In the meantime, privacy advocates and digital rights organizations will likely continue to voice their concerns, urging EU citizens to remain vigilant and engaged in the debate over digital privacy and surveillance. The next steps will involve intense negotiations and potential revisions to address the complex issues at stake.

[1]: On June 20, at the European Data Protection Supervisor (EDPS) 20th anniversary summit, EU Commissioner for Justice Vera Jourová stated that the European Commission’s proposal for the Child Sexual Abuse Regulation (CSAR) would break encryption. This marks the first time the European Commission has publicly acknowledged that the CSAR proposal would compromise encryption, a significant departure from the stance maintained over the past three years by Home Affairs Commissioner Ylva Johansson, who consistently claimed that the proposal would not affect encryption.

Deluge of ‘pink slime’ websites threaten to drown out truth with fake news in US election

US sites pushing misinformation are proliferating, aiming to look like reliable sources as local newspapers close down

Eric Berger Thu 20 Jun 2024 12.00 CEST

Political groups on the right and left are using fake news websites designed to look like reliable sources of information to fill the void left by the demise of local newspapers, raising fears of the impact that they might have during the United States’ bitterly fought 2024 election.

Some media experts are concerned that the so-called pink slime websites, often funded domestically, could prove at least as harmful to political discourse and voters’ faith in media and democracy as foreign disinformation efforts in the 2016 and 2020 presidential elections.

According to a recent report from NewsGuard, a company that aims to counter misinformation by studying and rating news websites, the websites are so prolific that “the odds are now better than 50-50 that if you see a news website purporting to cover local news, it’s fake.”

NewsGuard estimates that there are a staggering 1,265 such fake local news websites in the US – 4% more than the websites of 1,213 daily newspapers left operating in the country.
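
The “better than 50-50” odds follow directly from those two counts:

```python
# NewsGuard's counts: fake local-news sites vs. remaining daily newspapers.
fake_sites, daily_papers = 1_265, 1_213
print(f"{fake_sites / (fake_sites + daily_papers):.1%}")  # ~51.0%
```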

“Actors on both sides of the political spectrum” feel “that what they are doing isn’t bad because all media is really biased against their side or that they know actors on the other side are using these tactics and so they feel they need to,” said Matt Skibinski, general manager of NewsGuard, which determined that such sites now outnumber legitimate local news organizations. “It’s definitely contributed to partisanship and the erosion of trust in media; it’s also a symptom of those things.”

Pink slime websites, named after a meat byproduct, started at least as early as 2004 when Brian Timpone, a former television reporter who described himself as a “biased guy” and a Republican, started funding websites featuring names of cities, towns and regions like the Philly Leader and the South Alabama Times.

Timpone’s company, Metric Media, now operates more than 1,000 such websites and his private equity company receives funding from conservative political action committees, according to NewsGuard.

The Leader recently ran a story with the headline, “Rep Evans votes to count illegal aliens towards seats in Congress.”

In actuality, Representative Dwight Evans, a Democrat, did not vote to start counting undocumented immigrants in the 2030 census but rather against legislation that would have changed the way the country has conducted apportionment since 1790.

That sort of story is “standard practice for these outlets”, according to Tim Franklin, who leads Northwestern University’s Local News Initiative, which researches the industry.

“They will take something that maybe has just a morsel of truth to it and then twist it with their own partisan or ideological spin,” Franklin said. “They also tend to do it on issues like immigration or hot-button topics that they think will elicit an emotional response.”

A story published this month on the NW Arkansas News site had a headline on the front page that reported that the unemployment rate in 2021 in Madison County was 5.1% – even though there is much more recent data available. In April 2024, the local unemployment rate was 2.5%.

“Another tactic that we have seen across many of this category of sites is taking a news story that happened at some point and presenting it as if it just happened now, in a way that is misleading,” Skibinski said.

The left has also created websites designed to look like legitimate news organizations but actually shaped by Democratic supporters.

The liberal Courier Newsroom network operates websites in Arizona, Florida, Iowa, Michigan and Nevada, among other states, that – like the conservative pink slime sites – have innocuous-sounding names like the Copper Courier and Up North News. The Courier has run stories like “Gov Ducey Is Now the Most Unpopular Governor in America,” referring to Doug Ducey, the former Republican Arizona governor.

“In contrast, coverage of Democrats, including US President Joe Biden, Democratic Arizona Gov Katie Hobbs, and US Sen Mark Kelly of Arizona, is nearly always laudatory,” NewsGuard stated in a report about Courier coverage.

Tara McGowan, a Democratic strategist who founded the Courier Newsroom, has received funding from liberal donors like Reid Hoffman and George Soros, as well as groups associated with political action committees, according to NewsGuard.

“There are pink slime operations on both the right and the left. To me, the key is disclosure and transparency about ownership,” said Franklin.

In a statement, a spokesperson for the Courier said comparisons between its operations and rightwing pink slime groups were unfair and criticized NewsGuard’s methodology in comparing the two.

“Courier publishes award-winning, factual local news by talented journalists who live in the communities we cover, and our reporting is often cited by legacy media outlets. This is in stark contrast to the pink slime networks that pretend to have a local presence but crank out low-quality fake news with no bylines and no accountability. Courier is proudly transparent about our pro-democracy values, and we carry on the respected American tradition of advocacy journalism,” the spokesperson said.

While both the left and the right have invested in the pink slime websites, there are differences in the owners’ approaches, according to Skibinski.

The right-wing networks have created more sites “that are probably getting less attention per site, and on the left, there is a smaller number of sites, but they are more strategic about getting attention to those sites on Facebook and elsewhere”, Skibinski said. “I don’t know that we can quantify whether one is more impactful than the other.”

Artificial intelligence could also help site operators quickly generate stories and create fake images.

“The technology underlying artificial intelligence is now becoming more accessible to malign actors,” said Kathleen Hall Jamieson, a University of Pennsylvania communications professor and director of the Annenberg Public Policy Center, which publishes Factcheck.org. “The capacity to create false images is very high, but also there is a capacity to detect the images that is emerging very rapidly. The question is, will it emerge rapidly with enough capacity?”

Still, it’s not clear whether these websites are effective. In a 2023 study, Stanford University researchers reported that engagement with pink slime websites was “relatively low” and found little evidence that living “in a news desert made people more likely to consume pink slime”.

The Philly Leader and the NW Arkansas News both link only to Facebook accounts on their websites, with fewer than 450 followers each. Meanwhile, the Copper Courier and Up North News have accounts on all the major platforms and a total of about 150,000 followers on Facebook.

Franklin said he thinks that a lot of people don’t actually click links on social media posts to visit the website.

“The goal of some of these operators is not to get traffic directly to their site, but it’s to go viral on social media,” he said.

Republican lawmakers and leaders of the conservative news sites the Daily Wire and the Federalist have also filed a lawsuit and launched investigations accusing NewsGuard of helping the federal government censor right-leaning media. The defense department hired the company strictly to counter “disinformation efforts by Russian, Chinese and Iranian government-linked operations targeting Americans and our allies”, Gordon Crovitz, the former Wall Street Journal publisher who co-founded NewsGuard, told the Hill in response to a House oversight committee investigation. “We look forward to clarifying the misunderstanding by the committee about our work for the Defense Department.”

To counter the flood of misinformation, social media companies must take a more active role in monitoring such content, according to Franklin and Skibinski.

“The biggest solution to this kind of site would be for the social media platforms to take more responsibility in terms of showing context to the user about sources that could be their own context. It could be data from third parties, like what we do,” said Skibinski.

Franklin would like to see a national media literacy campaign. States around the country have passed laws requiring such education in schools.

Franklin also hopes that legitimate local news could rebound. The MacArthur Foundation and other donors last year pledged $500m to help local outlets.

“I actually have more optimism now than I had a few years ago,” Franklin said. “We’re in the midst of historic changes in how people consume news and how it’s produced and how it’s distributed and how it’s paid for, but I think there’s still demand for local news, and that’s kind of where it all starts.”