Thursday, October 19, 2023

From NBC News: Pro-Hamas extremists and neo-Nazis flood social media with calls for violence

A drawback of social media.

- Click here for the article.

. . . the most common threat now comes from disconnected individuals radicalized online.

“They could stay in their mother’s basement and be on the internet all day and get themselves worked up and listen to the propaganda and be that person who would be influenced to do something in a small group or as a lone actor,” he said.

Extremist content has been growing online since Hamas attacked Israel on Oct. 7. An Oct. 12 bulletin from the Institute for Strategic Dialogue called the platform X “an epicenter for content praising the attacks perpetrated by the Hamas-linked al-Qassam Brigades, as well as Palestinian Islamic Jihad (PIJ), and Hezbollah.”

The analysis counted more than 16 million views of 120 pieces of branded terrorist content put out by the groups on the X platform. The content included GoPro footage of fighters desecrating the bodies of Israeli servicemen. Hashtags associated with the content had 1.98 million mentions, the bulletin said.

The Institute for Strategic Dialogue also found that U.S.-based neo-Nazis have appropriated the language of Hamas and are using the conflict in an effort to inspire attacks in the U.S. and on Jewish communities globally. The neo-Nazi accounts are spreading official terrorist content linked to the Qassam Brigades on Telegram, while also linking to "Resistance Axis" groups on the platform.

Several members of extremist groups in the U.S. and Canada — including members of the Proud Boys — have posted on Telegram expressing a desire to kill or harm Jewish people, according to Advance Democracy, which studies far-right rhetoric online.

The European Union tries to fight back 

- E.U. demands Meta and TikTok detail efforts to curb disinformation from Israel-Hamas war.

The European Union ratcheted up its scrutiny of Big Tech companies on Thursday with demands for Meta and TikTok to detail their efforts to curb illegal content and disinformation during the Israel-Hamas war.

The European Commission, the 27-nation bloc’s executive branch, formally requested that the social media companies provide information on how they’re complying with sweeping new digital rules aimed at cleaning up online platforms.

The commission asked Meta and TikTok to explain the measures they have taken to reduce the risk of spreading and amplifying terrorist and violent content, hate speech and disinformation.

Under the E.U.’s new rules, which took effect in August, the biggest tech companies face extra obligations to stop a wide range of illegal content from flourishing on their platforms or face the threat of hefty fines.

The new rules, known as the Digital Services Act, are being put to the test by the Israel-Hamas war. Photos and videos have flooded social media of the carnage alongside posts from users pushing false claims and misrepresenting videos from other events.

Brussels issued its first formal request under the DSA last week to Elon Musk’s social media platform X, formerly known as Twitter.


- Click here for the Digital Services Act.

The DSA is meant to "govern the content moderation practices of social media platforms" and address illegal content. It is organized into five chapters; the most important of these regulate the liability exemption of intermediaries (Chapter 2), the obligations on intermediaries (Chapter 3), and the cooperation and enforcement framework between the Commission and national authorities (Chapter 4).

The DSA maintains the existing rule under which companies that host others' data become liable when informed that the data is illegal. This so-called "conditional liability exemption" is fundamentally different from the broad immunities given to intermediaries under the equivalent rule in the United States, Section 230 of the Communications Decency Act.

In addition to the liability exemptions, the DSA introduces a wide-ranging set of new obligations on platforms, including some that require disclosing to regulators how their algorithms work, while others create transparency around how content-removal decisions are made and how advertisers target users. The European Centre for Algorithmic Transparency was created to aid enforcement of these provisions.