williampietri,
@williampietri@sfba.social

Reading about Bluesky, it's impressive to me how little Jack has learned.

As somebody who (briefly) worked at Twitter on anti-abuse, the place is a goldmine of lessons on fucked-up behavior. And the CEO comes out of the cryptocurrency space, which also has a rich history of garbage people run wild.

But instead of something that applies any of that, they have apparently produced the same sort of tired neoliberal marketplace-of-ideas horseshit product that Twitter was to start.

williampietri,
@williampietri@sfba.social

I haven't used it, but from what I've read, Bluesky launched with no block button, and possibly no moderation at all. Which is unconscionable.

Maybe in 2006 you could pretend that the Internet was a nice place of hugs and rainbows. (That was untrue; even founders of the earliest social web sites in the mid 90s had abuse problems, and they were the same problems seen in the pre-Internet social computing spaces.) But today, acting like 4chan doesn't exist is just announcing you aren't serious about abuse.

williampietri,
@williampietri@sfba.social

And I've seen suggestions to give them a chance. Respectfully: Nah, I'll pass. Not only because of their launch. But because of what they've written.

For example, look at the survey of existing social media they did 2 years ago: https://gitlab.com/bluesky-community1/decentralized-ecosystem/-/blob/master/topics/moderation.md

That is, charitably, a very light analysis. It acknowledges the "acute problems" for centralized platforms, but doesn't seem to think the same thing applies to decentralized systems. (And let's note here that Bluesky has launched a centralized system.)

What's missing? A single example of a problem, for one. Let alone any serious look at what makes moderation necessary. Or how the mechanisms so briefly described work and don't work with particular problems.

williampietri,
@williampietri@sfba.social

Or we could look at this, from just 2 weeks ago:

https://blueskyweb.xyz/blog/4-13-2023-moderation

This at least has examples of possibly problematic content: "spam" and "nsfw". But what's conspicuously missing is other kinds of content that often gets moderated. Which is weird given that plenty of places, Twitter included, have lists: https://help.twitter.com/en/rules-and-policies/twitter-rules

williampietri,
@williampietri@sfba.social

Imagine if they had looked at those lists and asked, "Will this mechanism solve these problems?" Assuming they just didn't blink and say, "Indeedily doodily", they would have recognized that they had to scrap it and start over.

For example, let's take the famous Leslie Jones dogpile of 2016: https://www.cbc.ca/news/canada/leslie-jones-twitter-trolls-point-of-view-1.3690404

It was bad enough both for Jones and for Twitter's PR team that it was a turning point inside Twitter. The "free speech wing of the free speech party" was a fine abstract point, beloved by white dudes who didn't want anybody to tell them no. But it didn't work well in practice for everybody else.

How would Bluesky solve this? Apparently by hoping Jones had gone deep enough into the settings to turn off the "racist abuse" setting. Assuming that somebody, somewhere was labeling the racist abuse fast enough, of course.

williampietri,
@williampietri@sfba.social

But even if that worked (and see my previous feelings on neoliberal marketplace-of-ideas fantasies), that still leaves you with a racist mob feeling bold and looking for new targets. Looking for ways around the (inevitably leaky) automated labeling.

So what that mob is going to do is continue with their digital ethnic cleansing campaign, congratulating each other all the while.

williampietri,
@williampietri@sfba.social

As an aside, let me add that ethnic cleansing is as American as pie. I strongly recommend everyone read Loewen's "Sundown Towns", which carefully documents a hidden era of ethnic cleansing across America. And looks at how the cultural traditions linger. I think it's essential for understanding the age we're in, the second Nadir. https://www.amazon.com/Sundown-Towns-Hidden-Dimension-American/dp/0743294483

sharonecathcart,
@sharonecathcart@sfba.social

@williampietri

That book was an eye-opener, for sure.

dbsalk,
@dbsalk@mastodon.social

@sharonecathcart @williampietri Loewen's Lies My Teacher Told Me is also eye-opening. I read it about 15 or 20 years ago and I still think about the opening chapters every October when idiots try to defend Columbus Day.

https://www.barnesandnoble.com/w/lies-my-teacher-told-me-james-w-loewen/1100185227

sharonecathcart,
@sharonecathcart@sfba.social

@dbsalk @williampietri

Agreed. That book was amazing ... and frustrating ... because it's information we all should have had long before high school graduation.

williampietri,
@williampietri@sfba.social

@sharonecathcart @dbsalk

Truth! I learned so much that I should have already known. E.g., I grew up in Michigan. My mom's family were in real estate, so I knew the name of every podunk town on the west side of the state. Except Idlewild! https://en.wikipedia.org/wiki/Idlewild,_Michigan

williampietri,
@williampietri@sfba.social

But back to Bluesky. An essential dimension of moderation is asking the question, "On whom does the burden fall?"

If the burden of dealing with racism falls on the targets of racism, then congrats, you've just created something with built-in systemic racism.

Some of that is inevitable. The true shitbags are tireless. They will be probing and subverting your defenses, so you're at least in part going to have to rely on user reports. Which means you have to work hard to minimize that burden.

I see no evidence of that here. Or even awareness of the problem.

williampietri,
@williampietri@sfba.social

How did this happen? Well, you could look at their initial team announcement:

https://blueskyweb.xyz/blog/2-31-2022-initial-bluesky-team

Or their jobs page (the history of which you can see on the Wayback machine):

https://blueskyweb.xyz/join

Or who LinkedIn thinks is working there:

https://www.linkedin.com/search/results/people/?currentCompany=%5B%2279571598%22%5D&origin=COMPANY_PAGE_CANNED_SEARCH&sid=8VD

I don't see anybody there with expertise in these problems. There's definitely nobody whose job it is to think about this. So we have the classic approach of "build for the comfortable, worry about anybody else later if at all".

williampietri,
@williampietri@sfba.social

And I'd add that all of these problems, as Bluesky is learning, are hard enough with a centralized service. They get enormously harder with something decentralized.

A key dynamic is whether the anti-abuse people can adapt faster than the pro-abuse dirtbags. Once you are baking decisions into distributed protocols, your speed of iteration drops radically. Even with all of Twitter's centralized power, the place spent years just filled with abuse. A distributed product has to be more on the ball, not less.

williampietri,
@williampietri@sfba.social

So in conclusion, I think Bluesky is off to a terrible start and that nobody should use it. I get that Jack is persuasive and good at fooling people. As a recovering Twitter user, I also get that people just want those feelings back.

But as things stand now, Bluesky looks like it will be worse than Twitter in terms of abuse, and that will fall disproportionately on marginalized people. Ask yourself if you want to lend your personal slice of credibility to that. Whether you want to generate free content for a billionaire, the same billionaire who fucked this up last time.

williampietri,
@williampietri@sfba.social

Addendum: This is a helpful (and very colorful) look from @chuckwendig that confirms that Bluesky doesn't have a block button: https://terribleminds.com/ramble/2023/04/28/social-media-report-card-time-to-reskeet-the-blooski-apparently/

williampietri,
@williampietri@sfba.social

Also, by saying all this, I don't want to suggest that Mastodon is better (or that it isn't). I happen to be on an instance where the admins seem to take moderation pretty seriously. But I haven't studied what's going on at scale, and the replies here raise real concerns:
https://mas.to/@BlackAzizAnansi/110278023405568530

williampietri,
@williampietri@sfba.social

Another good addendum: a series of posts from the Bluesky CEO on how unprepared they were and how thoroughly they're half-assing all this: https://mastodon.social/@atomicpoet/110279204795922359

mike805,

@williampietri so what is wrong with having some places that are absolute free speech (and probably a dumpster fire of abuse) and some places that are safe spaces with puppies and padded walls? The rest can be in the middle. At least you will put the abusers all in one place, and the people who don't want to be abused just don't go there.

Around here there was an infamous ham radio repeater called 435 which was full of abuse, illegal foul language, often jammed, etc. It kept the other repeaters clean.

womble,

@mike805 @williampietri as places like kiwifarms, *chan, et al, have shown, the cesspools don't contain the filth, they breed it, and it leaks out to contaminate everything else.

Skepticat,
@Skepticat@mstdn.social

@williampietri
Great thread. Thanks so much for posting this.
