Israel-Hamas war misinformation on social media is harder to track

Researchers sifting through social media content about the Israel-Hamas conflict say it’s getting harder to verify information and track the spread of misleading material, adding to the digital fog of war.

As misinformation and violent content surrounding the war proliferate online, social media companies’ pullbacks in moderation and other policy shifts have made it “close to impossible” to do the work researchers were able to do less than a year ago, said Rebekah Tromble, the director of George Washington University’s Institute for Data, Democracy and Politics.

“It has become much more difficult for researchers to collect and analyze meaningful data to understand what’s actually happening on any of these platforms,” she said.

Much attention has focused on X, formerly known as Twitter, which has made significant changes since Elon Musk bought it for $44 billion late last year.

In the days after Hamas’ Oct. 7 attack, researchers flagged dozens of accounts pushing a coordinated disinformation campaign related to the war, and a separate report from the Technology Transparency Project found that Hamas has used premium accounts on X to spread propaganda videos.

The latter issue comes after X began offering blue check marks to premium users for subscriptions starting at $8 a month, rather than applying the badge to those whose identities it had verified. That has made it harder to distinguish the accounts of journalists, public figures and institutions from those of potential impostors, experts say.

“One of the things that is touted for that [premium] service is that you get prioritized algorithmic ranking and searches,” Technology Transparency Project Director Katie Paul said. Hamas propaganda is getting the same treatment, she said, “which is making it even easier to find these videos that are also being monetized by the platform.”

X is far from the only major social media company coming under scrutiny during the conflict. Paul said that X used to be an industry leader in combating online misinformation but that in the past year it has spearheaded a movement toward a more hands-off approach.

“That leadership role has remained, but in the reverse direction,” she said, adding that the Hamas videos highlight what she described as platforms’ business incentives to embrace looser content moderation. “Companies have cut costs by laying off thousands of moderators, all while continuing to monetize harmful content that perpetuates on their platforms.”

Paul pointed to ads that ran alongside Facebook search results related to the 2022 Buffalo, New York, mass shooting video while it circulated online, as well as findings by the Technology Transparency Project and the Anti-Defamation League that YouTube previously auto-generated “art tracks,” or music with static images, for white power content that it monetized with ads.

A spokesperson for Meta, which owns Facebook and Instagram, declined to comment on the Buffalo incident. The company said at the time that it was committed to protecting users from encountering violent content. YouTube said in a statement it doesn’t want to profit from hate and has since “terminated several YouTube channels noted in ADL’s report.”

X responded with an automated message: “Busy now, please check back later.”

The deep cuts to “trust and safety” teams at many major platforms, which came in a broader wave of tech industry layoffs beginning late last year, drew warnings at the time about backsliding on efforts to police abusive content — especially during global crises.

Some social media companies have changed their moderation policies since then, researchers say, and existing rules are sometimes being enforced differently or unevenly.

“Today in conflict situations, information is one of the most important weapons,” said Claire Wardle, a co-director of the Information Futures Lab at Brown University. Many are now successfully pushing “false narratives to support their cause,” she said, but “we’re left being completely unclear what’s really happening on the ground.”

Experts are encountering more roadblocks to accessing social media platforms’ application programming interfaces, or APIs, which allow third parties to gather more detailed information from an app than what’s available from user-facing features.

Some major platforms, such as YouTube and Facebook, have long limited access to their APIs. Over the past year, Reddit joined X in reducing free use of its API, though it waives its charges for noncommercial research. The most basic access to X’s API now starts at $100 a month and can run up to $42,000 a month for enterprise use.
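
To make concrete what that access looks like in practice, here is a minimal sketch, not a working research pipeline, of the kind of query a researcher might run against X’s v2 recent-search endpoint. The bearer token is a placeholder, the search terms are purely illustrative, and how much data a query like this can actually return depends on the paid tier described above.

import requests

# Minimal sketch, for illustration only: pull recent public posts matching a query
# from X's v2 recent-search endpoint. The token below is a placeholder; real access
# requires one of the paid tiers described above, and rate limits vary by tier.
BEARER_TOKEN = "YOUR_BEARER_TOKEN"  # placeholder credential, not a real token

response = requests.get(
    "https://api.twitter.com/2/tweets/search/recent",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    params={
        "query": "(israel OR gaza) -is:retweet lang:en",  # illustrative search terms
        "max_results": 100,                               # per-request maximum on this endpoint
        "tweet.fields": "created_at,public_metrics",      # request timestamps and engagement counts
    },
    timeout=30,
)
response.raise_for_status()

# Print a brief summary of each returned post: when it was published and how widely it spread.
for post in response.json().get("data", []):
    metrics = post["public_metrics"]
    print(post["created_at"], metrics["retweet_count"], post["text"][:80])

Before the pricing changes described above, researchers could run collection like this at far greater volume for little or no cost; today the same work sits behind the monthly fees that Tromble and others describe.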

TikTok has taken steps in the other direction. It launched a research API this year in the U.S. as part of a transparency push after having fielded national security concerns from Western authorities over its Chinese parent company, ByteDance.

YouTube said it has already removed thousands of harmful videos and is “working around the clock” to “take action quickly” against abusive activity. Reddit said its safety teams are monitoring for policy violations during the war, including content posted by legally designated terrorist groups.

TikTok said it has added “resources to help prevent violent, hateful or misleading content on our platform” and is working with fact-checkers “to help assess the accuracy of content in this rapidly changing environment.”

“My biggest worry is the offline consequence,” said Nora Benavidez, the senior counsel and director of digital justice at the media watchdog Free Press. “Real people will suffer more because they are desperate for credible information quickly. They soak in what they see from platforms, and the platforms have largely abandoned and are in the process of abandoning their promises to keep their environments healthy.”

Another obstacle during the current conflict, Tromble said, is that Meta has allowed key tools such as CrowdTangle to degrade.

“Journalists and researchers, both in academia and civil society, used [CrowdTangle] extensively to study and understand the spread of mis- and disinformation and other sorts of problematic content,” Tromble said. “The team behind that tool is no longer at Meta, and its features aren’t being maintained, and it’s just becoming worse and worse to use.”

That change and others across social media mean “we simply don’t have nearly as much high-quality verifiable information to inform decision-making,” Tromble said. Where once researchers could sift through data in real time and “share that with law enforcement and executive agencies” relatively quickly, “that is effectively impossible now.”

The Meta spokesperson declined to comment on CrowdTangle but pointed to the company’s statement Friday that it is working to intercept and moderate misinformation and graphic content involving the Israel-Hamas war. The company, which has rolled out additional research tools this year, said it has “removed seven times as many pieces of content” for violating its policies compared with the two months preceding the Hamas attack.

Resources remain tight for examining how social media content affects the public, said Zeve Sanderson, the founding executive director at New York University’s Center for Social Media and Politics.

Source: https://www.nbcnews.com/tech/misinformation/israel-hamas-war-misinformation-social-media-harder-track-rcna120173

