
Violence Foreshadowed in Social Media

Police officers guard the Tree of Life synagogue following a shooting at the synagogue in Pittsburgh, Pennsylvania, U.S., October 27, 2018.

Two acts of hatred in recent days have again exposed the way social-media services can be platforms for dangerous people to disseminate threats and intolerance that publicly foreshadow their violence.

Before suspected gunman Robert Bowers allegedly opened fire at a Pittsburgh synagogue on Saturday, killing 11, he posted anti-Semitic and Holocaust-denying messages on Gab.com, a social-media site popular with the alt-right.

And before Cesar Altieri Sayoc, the suspect in a spate of attempted bombings, allegedly mailed explosive material to prominent Democrats over the past week, he sent threatening messages on Twitter to a political analyst. She reported the tweet to the company, which left the message up, a decision it apologized for this weekend.

The episodes show how the perpetrators of mass acts of violence are often quite open in expressing their hatred, and sometimes their intentions, on internet platforms. That raises questions about the platforms’ responsibility for detecting and acting on such hate speech before it escalates to violence.

“We lead more and more of our lives online, so you have more and more of these digital crumbs that reveal what we are both thinking and capable of doing, whether it is our dating intentions to our voting intentions to our buying intentions to our violent intentions,” said P.W. Singer, co-author of the book “LikeWar: The Weaponization of Social Media” and senior fellow at New America, a think tank in Washington, D.C. He said sites need to do more to combat far-right extremism like that apparently behind the most recent incidents.

Gab launched in 2016, drawing many alt-right users, including neo-Nazis and other white supremacists, who were upset by efforts by Twitter and other social platforms to clamp down on hate speech and other abuse. Gab in 2017 said it had more than 225,000 users, and calls itself the “social network for creators who believe in free speech.”

Members and supporters of the Jewish community come together for a candlelight vigil in front of the White House in Washington, D.C., on October 27, 2018, in remembrance of those who died earlier in the day during a shooting at the Tree of Life synagogue in the Squirrel Hill neighborhood of Pittsburgh. Photo: Andrew Caballero-Reynolds/Agence France-Presse/Getty Images

Mr. Bowers appears to have used the site to spew anti-Semitic hatred openly. “Jews are waging a propaganda war against Western civilization and it is so effective that we are headed towards certain extinction within the next 200 years,” reads one message on Gab that Mr. Bowers reposted.

The account also posted an image of the entrance to the Auschwitz extermination camp where Jews were murdered during the Holocaust. The image was manipulated so that the entrance said “lies make money.”

Gab said in a statement Saturday that it contacted the Federal Bureau of Investigation immediately after discovering an account linked to Mr. Bowers.

“Gab unequivocally disavows and condemns all acts of terrorism and violence,” the company said. It said it prohibits calling for acts of violence against others and threatening language that “clearly, directly and incontrovertibly infringes on the safety of another user or individual.”

Other companies signaled that Gab hadn’t done enough to police its content. Payments firm PayPal Holdings Inc. said Saturday that it had canceled Gab’s account and had already been moving to cut off the site before the shooting. “When a site is allowing the perpetuation of hate, violence or discriminatory intolerance, we take immediate and decisive action,” a PayPal spokesman said.

Gab had run on Microsoft’s Azure cloud-computing service, but a Microsoft spokesman said the companies agreed to end that deal last month after Microsoft received complaints about Gab.

As of late Saturday, Gab’s site was down. The company couldn’t immediately be reached for comment.

In the case of Mr. Sayoc, a Twitter account believed to belong to him tweeted at a political analyst earlier this month after she appeared on Fox News. The message told the analyst to “hug your loved ones real close every time you leave you home” (sic). The analyst said she reported the message to Twitter, but the company told her at the time that the post didn’t violate its rules.

Twitter said in a statement late Friday that it had made a mistake. “The Tweet clearly violated our rules and should have been removed. We are deeply sorry for that error,” Twitter said in the statement.

Social-media companies have struggled to come up with rules that catch harmful behavior, and to enforce their own standards consistently.

Twitter has attempted to crack down on abuse on its platform and is working on a policy to address dehumanizing language that can have repercussions off its site, such as normalizing violence.

Often the job of moderating content on these sites is left to algorithms that aren’t yet up to the task and to contract workers in foreign countries who sometimes don’t understand the cultural context behind messages that could be perceived as threatening.

Source: Wall Street Journal