
As tech faces a reckoning, what you do offline can get you banned

By Elizabeth Culliford

Earlier this month, Twitch announced it would start banning users for behavior away from its site.

The move by Amazon.com Inc's live-streaming platform, which included hiring a law firm to investigate users' misconduct, made it the latest prominent example of a tech company acting on "off-service" behavior.

How platforms enforce rules against activities conducted not just on their services but on other sites and offline is often described only vaguely in their policies. But as lawmakers and researchers examine tech's relationship with real-world violence and harm, this kind of moderation is gaining attention.

While some groups have praised platforms for being proactive in protecting users, others criticize them for infringing on civil liberties.

"This isn't content moderation, this is conduct moderation," said Corynne McSherry, legal director at the digital rights group Electronic Frontier Foundation, who said she was concerned about platforms that struggle to effectively moderate content on their own sites extending their reach.

In interviews, platform policy chiefs described how they drew different lines around off-service actions that could impact their sites, acknowledging a minefield of challenges.

"Our team is looking across the web at a number of different platforms and channels where we know that our creators have a presence...to understand as best as possible the activities that they're engaging in there," said Laurent Crenshaw, policy head at Patreon, a site where fans pay subscriptions for creators' content.

Facebook Inc's rules ban users it deems dangerous, including those involved in terrorist activity, organized hate or criminal groups, convicted sex offenders and mass murderers. People who have murdered one person are mostly allowed, a spokeswoman said, due to the crime's volume. Last year, Facebook expanded the list to include "militarized social movements" and "violence-inducing conspiracy networks" like QAnon.

Twitch's new rules say it may ban users for "deliberately acting as an accomplice to non-consensual sexual activities" or actions that would "directly and explicitly compromise the physical safety of the Twitch community," categories which a spokeswoman said were intentionally broad.

Twitch's change in policy largely stemmed from the gaming industry's #MeToo moment in the summer of 2020, when the site saw harassment surface at real-life gaming events and on sites like Twitter and Discord, Chief Operating Officer Sara Clemens told Reuters.

Looking beyond their own sites has helped companies remove extremists and others who have "learned the hairline cracks" in site rules to stay online, said Dave Sifry, vice president of the Anti-Defamation League's Center for Technology and Society, which has pushed for major platforms to incorporate this behavior into decisions.

Self-publishing site Medium established off-service behavior rules in 2018, after realizing attendees of the August 2017 white nationalist rally in Charlottesville who had not broken rules on specific sites appeared to be "bad actors on the internet in general," it said.

Last summer's protests over the murder of George Floyd prompted Snap Inc to talk publicly about off-platform rules: CEO Evan Spiegel announced Snapchat would not promote accounts of people who incite racial violence, including off the app. In December 2020, TikTok updated its community guidelines to say it would use information available on other sites and offline in its decisions, a change that a spokeswoman said helped it act against militia groups and violent extremists.

Notably, when sites including Facebook, Twitter Inc and Twitch banned former U.S. President Donald Trump this year, they took into account his off-service actions that led to his supporters storming the U.S. Capitol on Jan. 6.

FROM MURDER TO MONEY LAUNDERING

Tech companies differ in their approaches to off-platform behavior, and how they apply their rules can be opaque and inconsistent, researchers and rights groups say.

Twitter, a site where white nationalists like Richard Spencer continue to operate, focuses its off-service rules on violent organizations, global director of public policy strategy and development Nick Pickles said in an interview.

Other platforms described specific red-flag activities: Pinterest, which takes a hard-line approach to health misinformation, might remove someone who spreads false claims off the platform, policy head Sarah Bromma said. Patreon's Crenshaw said that while the subscription site wants to support rehabilitated offenders, it might bar or restrict convicted money launderers or embezzlers from using its platform to raise money.

Sites also diverge on whether to ban users solely for off-service activity or whether on-site content must be linked to the offense.

Alphabet Inc's YouTube says it requires users' content to be closely linked to a real-world offense, but it may remove users' ability to make money from their channel based on off-service behavior. It recently did this to beauty influencer James Charles for allegedly sending sexually explicit messages to minors.

Charles' representatives did not respond to requests for comment. In a statement posted on Twitter this month, he said he had taken accountability for conversations with individuals who he said he believed were over 18, and that his legal team was taking action against people spreading misinformation.

Deciding which real-life actions or allegations require online punishments is a thorny area, say online law and privacy experts.

Linking the activity of users across multiple sites is also difficult for reasons including data privacy and the ability to attribute actions to individuals with any measure of certainty, say experts.

But that has not deterred many companies from expanding the practice. Twitch's Clemens said the site was initially focusing on violence and sexual exploitation, but it planned to add other off-site activities to the list: "It's incremental by design," she said.

© Thomson Reuters 2021.

©2021 GPlusMedia Inc.

16 Comments

An open democratic society needs a public square.

A place where anybody can go and express their thoughts publicly and where people can gather to share ideas.

In lockdowns, we can't go to the public square, so we need an online one.

The problem is that the current online public square has a privately controlled on ramp.

6 ( +8 / -2 )

So your enemies no longer have to hack your accounts to destroy your life. They can just pretend to be an evil version of you* on websites you have never heard of, and GAFA will render you an 'unperson'.

The world becomes more dystopian with every passing day.

For Trek fans, think bearded Spock from 'Mirror, Mirror'.
4 ( +7 / -3 )

An open democratic society needs a public square.

Mike Lindell has you covered! But let me be Frank, you can’t take the Lord’s name in vain or use cuss words.

0 ( +2 / -2 )

Facebook Inc's rules ... People who have murdered one person are mostly allowed, a spokeswoman said, due to the crime's volume.

"How many people did you murder"?

"Only one."

"That's OK then but don't do it again or we will ban you."

"OK, I won't."

4 ( +4 / -0 )

These tech companies really need to have their wings clipped.

0 ( +2 / -2 )

I really recommend reading U.S. Justice Clarence Thomas's opinion on regulating tech like Twitch, as it seems quite appropriate and widely popular: https://www.brookings.edu/blog/techtank/2021/04/09/justice-thomas-sends-a-message-on-social-media-regulation/

0 ( +0 / -0 )

I really recommend reading the US's Justice Thomas's opinion on regulating tech like Twitch as it seems quite appropriate and widely popular:

Treating internet platforms as common carriers as the justice suggests only works if Section 230 of the Communications Decency Act of 1996 that provides websites immunity from lawsuits for third party content is maintained. What won't work is to revoke Section 230, exposing internet providers to lawsuits for what third parties post while simultaneously requiring these same sites to allow anyone to post anything without restriction. If both actions occur, the internet as we know it today will disappear. No company is going to take the risks implied in allowing anyone to post anything as a common carrier while still being held liable for what is posted.

1 ( +1 / -0 )

These tech companies really need to have their wings clipped.

There is more to it than just the "tech companies" involved in abuse. Private employers and public agencies both routinely terminate employees for posts that the employer or agency deems objectionable. The public often demands this of them, threatening legal or electoral action if they do not. Schools discipline students for what they post on the internet from their own homes on their family's computer. No school time or school resources are involved, but the schools still punish kids for their after-school internet activity. Punishing what is called "cyber bullying" is widely popular, and schools that fail to do so expose themselves to potentially crippling lawsuits. Employers often demand to see all of a job applicant's social media before considering the applicant for a position. Some employers even demand usernames and passwords.

3 ( +3 / -0 )

Don't agree with their decision

-1 ( +0 / -1 )

An open democratic society needs a public square.

Yes, one where we are responsible for our words and actions, not hiding behind anonymity and avatars. So I welcome this move.

-2 ( +0 / -2 )

Twitch has no right to be judge and jury outside of its own app.

2 ( +2 / -0 )

Twitch has no right to be judge and jury outside of its own app

Just being the devil's advocate here, but how do you think Twitch in this example should deal with a member who turns out to be a terrorist? Or maybe has a side business making and selling child pornography? Is Twitch obliged to look the other way for the sake of free speech? Should Twitch face legal repercussions for hosting such objectionable members? Or should Twitch be legally immune? And how does Twitch deal with public outrage if such a person is identified as having a Twitch account? If there are public demands to ban this person, and especially if politicians demand Twitch banish this person, are they obliged to do so? Should they? Like I said, just being the devil's advocate.

0 ( +0 / -0 )

Desert Tortoise

Twitch has no right to be judge and jury outside of its own app

Just being the devil's advocate here but how do you think Twitch in this example should deal with a member who turns out to be a terrorist? Or maybe has a side business making and selling child pornography?

Those would be serious charges to level against someone without the accused being given a trial or jury for any suspected crime. Who gets to decide if someone is guilty without a trial? In America, there are also First Amendment rights.

Every social network site can control its own apps/sites and ban or suspend those who violate the TOS.

0 ( +0 / -0 )

None, not a single one, not one social website has an obligation, of any kind, to accept your posts.

Why people think they do is stunningly ignorant and unforgivably self centered.

1 ( +1 / -0 )

Regarding schools, if kids do Bad Things in school or using school computers, it is a matter for the school to pursue. If they do Bad Things outside school, it is a matter for their parents, police or social services. They may all then work with the school afterwards, but teachers are not detectives, police officers or judges.

The only real issue for employers would be if someone was leaking company secrets. Anything else is their personal business. Your company does not own you, and in your free time you do not represent your company. You are an employee, not a slave.

The issue nowadays is when someone is targeted by an activist witch hunt online. This is an unpleasant and new form of crowd-sourced bullying, and the solution is to grow a pair and support the victims of it, as you should support victims of any other sort of bullying.

-Some employers even demand usernames and passwords.

Folk should avoid working for a company like that. Huge red flag that they are going to treat you like dirt.

Just find someone else to work for and leave them to employ the desperate, incompetent and unfortunate. It may actually be a crime in some countries to be forced to hand over your password to anyone who is not legally mandated to ask for it. And if you do hand over your password to anyone, your bank, your insurer and your ISP will consider you responsible for any issues that occur.

Internet sites are conduits and platforms, like pubs. To maintain free speech, they need to default to open and not be held liable for content or forced to act as judges beyond the basic remit of their T's & C's, which should be based upon what is and what is not actually lawful. In that case, incitement to violence is criminal, but simply being rude about someone or holding an unpopular opinion is not.

Woke activists, like every other fascist group and political regime before them, simply want to ban, de-platform, isolate and silence everyone whose opinions they disagree with. We need to stand up to this wave of censorship. Censorship has no place in a civilised society. It is a tool of repression. If someone says something that you do not agree with, voice your alternative. Do not burst into tears and run to the hurt feelings police, demanding that they be rendered an unperson.

Governments will use the tech companies as a tool of censorship and oppression by proxy, so they do not look like a bunch of dictators. We've already seen that in China, and in the West with the restrictions on search engines. We should not allow them to co-opt GAFA as a privatised version of the Stasi.

We need to flip to distributed systems in which people take control of what they see on the net and have the option to block from their screens what they do not wish to see. Tech companies would then be, by definition, conduits, as they would have no control over content for governments to force them to exercise through censorship. Content would be the legal responsibility of its creator, as it should be.

0 ( +0 / -0 )

Internet sites are conduits and platforms, like pubs. To maintain free speech, they need to default to open and not be held liable for content or forced to act as judges beyond the basic remit of their T's & C's, which should be based upon what is and what is not actually lawful. In that case, incitement to violence is criminal, but simply being rude about someone or holding an unpopular opinion is not.

But the Republicans argue that Section 230 should be lifted, allowing them to be sued over the third-party content posted on their web platforms, even as the Republicans also demand the very same tech companies not moderate content, calling it censorship. The free-market economist in me says that if I own the web platform, I can moderate it to whatever degree I wish. It is not up to the government to tell me I cannot remove an offensive post when that post reflects badly on my company and perhaps me personally. Were I a shareholder, I would expect the management of the web platform to protect the company image by eliminating objectionable posts. Just make sure the terms of service spell out clearly that the mods have the last word. What you call "crowd-sourced bullying" isn't. I am old enough to remember the old saying "the customer is always right". If your customers object to what your web platform allows to be posted, you stand to lose customers, which is emphatically the customers' right. There are no beauty queens in this pageant, are there?

0 ( +0 / -0 )
