siderea, (edited)

There are two problems that are coming for Mastodon of which apparently an awful lot of people are unaware. These problems are coming for Mastodon not because of anything specific to Mastodon: they come to all growing social media platforms. But for some reason most people haven't noticed them coming.

The first problem is that scale has social effects. Most technical people know that scale has technological effects. Same thing's true on the social side, too.

🧵

CC: @Gargron

siderea,

For instance, consider the questions "How likely, statistically speaking, are you to run into your boss on this social media platform?" and "How likely, statistically speaking, are you to run into your mother on this social media platform?" While obviously there is wide individual variation based on personal circumstances, in general the answer to those questions is going to be a function of how widespread adoption is in one's communities.
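
To put a rough number on that intuition – a toy model, not data – suppose each member of an n-person community (your office, your extended family) adopts the platform independently with probability p. Then the chance that at least one of them is there is 1 - (1 - p)^n:

    # Toy model, illustrative only: the chance that at least one member
    # of an n-person community is on the platform, assuming each member
    # independently adopts it with probability p.
    def chance_of_running_into_someone(n: int, p: float) -> float:
        return 1 - (1 - p) ** n

    for p in (0.01, 0.05, 0.20):
        print(f"adoption {p:4.0%}: office of 30 -> "
              f"{chance_of_running_into_someone(30, p):.0%}")
    # ~26% at 1% adoption, ~79% at 5%, ~100% at 20%: on a niche platform
    # you can safely assume your boss isn't there; on a mainstream one,
    # you can't.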

Thing is, people behave differently on a social media platform when they think they might run into their boss there. People behave differently when they think they might run into their mother.

And it's not just bosses and mothers, right? I just use those as obvious examples that have a lot of emotional charge. People also behave differently depending on whether or not they think their next-door neighbors will be there (q.v. Nextdoor.com).

🧵

krupo,

@siderea you raise a lot of thoughtful points here. There's definitely a "this is a niche place with fewer griefers" element at play; however, a few design factors may help with "natural" policing by the community:

  • no algo-driven timeline to "reward" nasty griefers with attention
  • the risk of instances being defederated is similar to the example of university admins being told to deal with a problem (time will tell how effective it is, but the parallel holds)
  • longer text limits allow, and I'll argue encourage, more nuance than pithy angry one-liners
  • again, not being corporate sponsored means there's no marketing push with hard dollars to encourage mass joiners, keeping growth slower

siderea,

How people behave on a social media platform turns out to be a function of whom they expect to run into – and whom they actually run into! – on that social media platform. And that turns out to be a function of how penetrant adoption is in their communities.

And a problem here is that so many assume that the behavior of users of a given social media platform is wholly attributable to the features and affordances of that social media platform!

It's very easy to mistake what are effects of being a niche or up-and-coming platform for something the platform is getting right in its design.

The example I gave – people behaving differently depending on how likely they estimate they are to run into certain other parties in their lives – is not the only example of how scale affects how people engage with a social media platform. There are others that I know about, and probably lots I don't.

🧵

siderea,

For instance, tech people are probably aware of the phenomenon that virus writers are generally more attracted to writing viruses for platforms that have more users. This is one of the main reasons that there are (and have always been) fewer viruses written for macOS than for Windows.

You've probably never thought of it this way – mad props to the article in Omni I read a long time ago that brought this to my attention – but writing a virus is a kind of griefing. Like in a game. It's about fucking up other people's shit for kicks and giggles, if not for profit, and doing so at scale.

Well, griefers – people who are motivated by enjoying griefing as a pastime – are going to be more drawn to bigger platforms with more people to fuck with.

Deliberate malicious obnoxiousness and trolling varies not linearly with population size, but geometrically or worse.
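
One crude way to see the arithmetic behind that claim – a sketch, not a measurement: the number of possible pairings between users, each one an opportunity for somebody to grief somebody else, grows quadratically with the user count.

    # Possible user pairings grow as n*(n-1)/2, so even a fixed
    # proportion of griefers gets super-linearly more targets and
    # opportunities as the platform grows. Illustration only.
    def possible_pairings(n: int) -> int:
        return n * (n - 1) // 2

    for n in (1_000, 10_000, 100_000):
        print(f"{n:>7,} users -> {possible_pairings(n):>13,} possible pairings")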

🧵

siderea,

Or put another way, a social media platform can avoid a certain amount of social griefing just by being small, and therefore not worth the time of griefers who are looking for bigger fish to fry. As that platform grows, it loses that protection.

So you can't tell, not for sure, how good a platform's systems are for managing that kind of griefing until it gets big enough to really start attracting griefing at scale.

🧵

siderea,

So that's one problem: there are simply social size effects that affect how people behave on a social media platform, so as the platform grows in adoption, how people behave on it will change. Usually not in ways that are thought of as for the better, because a niche platform can avoid various social problems that can no longer be avoided as it grows.

The other problem I think is even more fascinating.

When a social media platform is founded, there are filter effects on who joins that platform. But as a social media platform grows, those filters – some of them – fall away.

🧵

siderea,

When I talk about filters, I mean things like the following famous examples:

  • When Facebook was founded, it was only for students at universities; one could only sign up for it with a college email address. Consequently, Facebook's early userbase was almost entirely college students – with all that implies for socioeconomic class.

  • When G+ was founded, it was initially opened to Google employees, and used an invite code system for rollout, such that overwhelmingly its early users were people in the same social worlds as Googlers.

  • In the heyday of USENET, the vast majority of internet users, period, were college students majoring in technical topics.

These social spaces, consequently, inherited (in the object-oriented sense) the social norms of the demographics that initially populated them.

🧵

siderea,

Regardless of the specifics of what different platforms' initial userbases are, one of the fascinating consequences of having such filters is a higher level of social homogeneity.

I know it doesn't seem like a very high level of social homogeneity when you're in it. "What are you talking about, lady?! We have both emacs users AND vi users!"

But in a way that is largely invisible at the time to the people in it, they're in a kind of cultural bubble. They don't realize that a certain amount of social interaction is being lubricated by a common set of assumptions about how people behave and how people should behave.

Now, they may not like those assumptions very much; they may not be very nice assumptions, or ones they find agreeable. But they're known. Even if only unconsciously or inchoately. And that turns out to count for a lot, in terms of reducing conflict or making it manageable.

🧵

siderea,

But, of course, as a social media platform grows, those filters change or fall away.

Facebook expanded enrollment to high school students, then dropped the requirement of an educational affiliation altogether.

AOL, which at the time was mailing physical install media to every mailing address in the United States, unsolicited, repeatedly, plugged itself into USENET and opened the floodgates in an event that is referred to as the September That Never Ended.

(For those of you who don't know, that term refers to the fact that previously, large numbers of clueless users who didn't know how to operate USENET only showed up at the beginning of the American academic year. AOL, not being tied to the academic calendar and having large numbers of new users every day, effectively swamped the capacity of USENET culture to assimilate new members by sending a September's worth of cluelessness every month forever thereafter.)

🧵

siderea, (edited)

Additionally, as a social media platform becomes more popular, it becomes more worth the effort to get over the speed bumps that discourage adoption.

We've already seen this with regards to Mastodon. Where previously an awful lot of people couldn't be bothered to figure out this whole federation, picking-a-server thing to set up an account in the first place, of late it has seemed much more worth the effort of sorting that out – not just because Twitter sucks and its users are looking for an alternative, but because Mastodon has become more and more attractive the more and more people use it.

So people who once might have been discouraged from being Mastodon users are no longer discouraged, and that itself is the reduction of a filter. Mastodon is no longer filtering quite so much for people who are unintimidated by new technologies.

Now you might think that's a good thing, you might think that's a bad thing: I'm just pointing out it IS a thing.

🧵

siderea,

Over time, as a social media platform becomes more and more popular, its membership starts reflecting more and more accurately the full diversity of individuals in a geographic area or linguistic group.

That may be a lovely thing in terms of principles, but it comes with very real challenges – challenges that, frankly, catch most people entirely by surprise, and that they are not really equipped to think about how to deal with.

Most people live in social bubbles to an extent that is hard to overstate. Our societies allow a high degree of autonomy in deciding with whom to affiliate, so we are to various degrees afforded the opportunity to just not deal with people who are too unpleasant for us to deal with. That can include people of cultures we don't particularly like, but it also includes people who are just personally unpleasant.

🧵

siderea, (edited)

Many years ago, at the very beginning of my training to become a therapist, I was having a conversation with a friend (not a therapist) about the challenges of personal security for therapists.

She said, of some example I gave of a threat to therapist safety, "But surely no reasonable person would ever do that!"

"I'm pretty sure," I replied, "the population of people with whom therapists work is not limited only to people who are reasonable."

I think of that conversation often when discussing social media. Many of the people who wind up in positions to decide how social media platforms operate and how to try to handle the social problems on them are nice, middle class, college educated, white collar folks whose general attitude to various social challenges is "But surely no reasonable person would ever do that!"

🧵

siderea,

As a social media platform grows, and its user base becomes more and more reflective of the underlying society it is serving, it will have more and more users on it who behave in ways that the initial culture will not consider "reasonable".

This is the necessary consequence of having less social homogeneity.

Some of that will be because of simple culture clash, where new users come from other cultures with other social expectations and norms. But some of that will be because older users weren't aware they were relying on the niche nature of the platform to just avoid antisocial or poorly socialized people, and don't really have a plan for what to do about them when they show up in ever greater numbers, except to leave. Only now they can't leave, not with impunity, because they're invested in the platform.

So the conflict level goes up dramatically.

🧵

siderea,

As a side note, one of the additional consequences of this phenomenon – where a growing social media platform starts having a shifting demographic that is more and more culturally and behaviorally diverse, and starts reflecting more and more accurately the underlying diversity of the society it serves, and consequently has more and more expressed conflict – is that a rift opens up between the general mass of users, on the one hand, and the parties that are responsible for the governance of the social media platform, on the other.

This is where things go really sour.

That's because the established users and everyone in a governance position – from a platform's moderators to its software developers to its corporate owners or instance operators – wind up having radically different perspectives, because they are quite literally witnesses to different things.

🧵

siderea,

The established users, who are still within their own social bubbles, have an experience that feels to them like, "OMG, where did all these jerks come from? The people responsible for running this place should do something to fix it – things were fine here the other day, they need to just make things like they used to be. How hard could it be?" They are only aware of the problems that they encounter personally, or are reported to them socially by other users or through news media coverage of their platform.

But the parties responsible for governance get the fire hose turned on them: they get to hear ALL the complaints. They get an eagle's eye view of the breadth and diversity and extent of problems.

Where individual users see one problem, and don't think it's particularly difficult to solve, the governance parties see a huge number of problems, all at once, such that even if they were easy to solve, it would still be overwhelming just from numbers.

🧵

anne_twain,
@anne_twain@theblower.au

@siderea My experience aligns with what you're saying. I had difficulties in a wiki community where the people responsible for conflict resolution were volunteers – that is, self-appointed, without any kind of qualifications or even referrals from other users. Some of them were unsophisticated, dare I say uneducated and gullible.

The wiki boasts that it protects its contributors, but frankly, if I'd had any idea how open to harassment, stalking and downright stupidity I would be, I would never have got started.

So – volunteer, unqualified moderators are a problem and, as you say, will be burdened even more in a large user group.

In part, the troubles I had were due to cultural differences. The difference between people speaking Russian and English may be obvious, but the differences between Australians and Americans are more subtle, and misunderstandings are exacerbated if the American party thinks there are no differences and that what you said means exactly what they think it means.

BillySmith,
@BillySmith@social.coop

@siderea

Only part-way through this thread, and recognising a lot of this. :D

I'd be interested in being a fly-on-the-wall to a conversation between yourself and @ifixcoinops who has been running Improbable Island for over a decade.

He noted some of his experiences here:

https://retro.social/@ifixcoinops/111008076839685387

siderea,

But of course they're not necessarily as easily solved as the end users think. End users think things like, "Well just do X!" where the governance team is well aware, "But if we did X, that might solve it for you, but it would make it worse for these other people over here having a different problem."

The established users wind up feeling bewildered, hurt, and betrayed by the lack of support around social problems from the governance parties, and, it being a social media platform, they're usually not shy about saying so. Meanwhile, the governance parties start feeling (alas, not incorrectly) that their users are not sympathetic to what they're going through, how hard they're working, how hard they're trying, and how incredibly unpleasant what they're dealing with is. They start feeling resentful towards their users, and, in the face of widespread intemperate verbal attacks from their users, sometimes become contemptuous of them.

🧵

siderea,

The dynamic I just described is, alas, the best case scenario. Add in things like cultural differences between the governance parties and the users, language barriers, good old-fashioned racism, sexism, homophobia, transphobia, etc., and any other complexity, and this goes much worse, much faster.

🧵

siderea,

For anyone out there who is dubious about this difference in perspective between the governance parties and the end users, I want to talk about the most dramatic example of it that I personally encountered.

There used to be on LiveJournal a "community" (group discussion forum) called IIRC "internet_sociology". Pretty much what it sounded like, only it was way more interested in the sociology (and anthropology) of LiveJournal itself, of course, than any of the rest of the internet.

Anyways, one day in, IIRC, the late 00s, somebody posted there a dataviz image of the COMPLETE LiveJournal social graph.

And that was the moment that English-speaking LiveJournal discovered that there was an entirely other HALF of LJ that was Russian-speaking, of which they knew nothing, and to which there was almost no social connection.

🧵

siderea,

For LJ users who had just discovered the existence of ЖЖ, it was kind of like discovering the lost continent of Atlantis. The dataviz made it very clear. It represented the social graph of the platform they were on as two huge crescents, barely connected, but about the same size. And all along, the governance parties of LJ were also the governance parties of ЖЖ.

And it turns out, absolutely unsurprisingly, LJ and ЖЖ had very different cultures, because they had had different adoption filters to start out with. LJ initially had been overwhelmingly adopted by emo high school students as a diary platform (LJ once jokingly announced it was adding an extra server just to index the word "depression".) ЖЖ had initially been adopted by politically active adults – average age, in their 30s – as a blogging platform.

🧵

siderea,

Turns out, also absolutely unsurprisingly, these two populations of users wanted very different features, and had quite different problems.

One of the ways LJ/ЖЖ threaded that needle was to make some features literally contingent upon the character set a user picked. LiveJournal literally had "Cyrillic features": features that had nothing to do with the character set itself, but that only turned on for an account if it elected that character set.
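
A hypothetical reconstruction of the mechanism – invented feature names, not LJ's actual code – would be a feature flag keyed off the elected character set:

    # Hypothetical sketch of "Cyrillic features": features unrelated to
    # text encoding itself, gated on the character set a user elected.
    # The feature names here are invented for illustration.
    CYRILLIC_ONLY_FEATURES = {"blog_promotion", "extended_stats"}

    def feature_enabled(feature: str, user_charset: str) -> bool:
        if feature in CYRILLIC_ONLY_FEATURES:
            return user_charset == "cyrillic"
        return True  # everything else is on for everyone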

🧵

siderea,

Also unsurprisingly, when a Russian company bought LJ/ЖЖ from an American company, the governance parties started prioritizing the ЖЖ users' issues and feature requests, to the considerable confusion and distress of the LJ users who were unaware of the entire existence of ЖЖ. "Why on Earth would we want a feature that does this? Why would they think we would want it? Is LJ insane? What are they trying to make this place?" No, whatever feature it was actually was a pretty attractive one for someone who's a political blogger trying to maximize their reach, i.e. ЖЖ users.

You can see how a pretty enormous rift can open up between end users, who have literally no clue as to some of the most basic facts of the platform – like, say, that an entire 50% of the user base is radically different from them in language and culture and usage patterns and needed affordances – and the governance parties who are trying to juggle all the anvils, some of which are on fire.

🧵

siderea,

There's a little exercise one can do, if one is an end user of a social media platform (or for that matter a governance party to a niche social media platform that has yet to hit the upslope of the diversity wave) and one wants a better sense of what governance parties have to deal with. If you've ever had a retail job dealing with the general public, just remember what the general public was like to deal with when you waited tables or ran a register or took orders or answered phones.

And if you've never had such a job yourself, or it's been a while, take yourself to a place like Reddit's r/TalesFromRetail or r/TalesFromYourServer and check out the sorts of things people who deal with the general public find themselves having to deal with.

And then reflect on this: all those irrational, entitled, belligerent, obnoxious people are loose in the world, and as your social media platform grows, it will eventually hoover THEM up from the bottom of the pool.

🧵

siderea,

Because that – and worse (so very much worse) – is what your governance parties have to deal with.

I don't just mean governance parties have to deal with rude people being rude to them. First of all, the problem is so much worse than mere rudeness, and social problems extend far beyond problems between two parties being in some sense in conflict. But secondly, and more importantly, it becomes their problem when someone is "rude" to someone else. They don't just have to deal with obnoxious people being obnoxious at them; they have to in some sense do something about obnoxiousness in general, and are often put in the position of having to show up and confront the obnoxious person, or otherwise do something to frustrate the obnoxious person – which will probably not make them less obnoxious, and will also bring the governance party to the attention of the obnoxious person.

🧵

siderea,

And if you are yourself a governance party who finds yourself having more and more difficulty empathizing with and respecting your end users, maybe remember what it was like to be an end user, and to largely be helpless to handle all sorts of social problems oneself, and to be stuck relying on authorities who may be unsympathetic, actively hostile, and/or just both clueless and clue-resistant.

I mean, just reflect on what it was like to be a Twitter user over the last year. Only don't let yourself use the cop-out of "But you don't have to be a Twitter user, you can leave Twitter."

🧵

siderea,

A lot of people, especially on the Fediverse, wind up being governance parties precisely because they don't want to be disempowered anymore. They want to be the people who make decisions about how to solve social problems on their social media platform of choice, and Mastodon/etc makes that much easier than trying to get a job with Twitter's Trust and Safety team.

So it's worth remembering if you are a governance party on the Fediverse, that's great for you, you're all empowered by that arrangement – but your end users are still end users. They get search if you choose to give them search. They still rely on your sitting in judgment of the reports they file on other users to take action on bad actors on their instance. They still experience themselves as largely just as disempowered as they were on Twitter. They have a choice of what lord's fields to till, but they're still peasants.

🧵

siderea,

But I digress.

🧵

siderea,

Returning to my larger point about the two problems that are coming for Mastodon: I'm seeing a lot of people make a lot of assumptions about how well things are working, in terms of solving social problems, that are basically predicated on not knowing that these two problems are bearing down on us.

This puts me in the weird position of actually arguing against empiricism. I'm usually a big fan of "throw it against the wall and see if it sticks" experimentalism as a remedy for head-in-the-clouds theorizing.

But this is really a situation in which foresight is desperately necessary.

It is simply not accurate to extrapolate the efficacy of various attempts to solve social problems on Mastodon based on how well they've worked so far.

When you're climbing an adoption curve, past performance is not a guarantee of future results.

🧵

siderea,

A couple decades ago, Clay Shirky gave a talk, which he then published as an essay, "A Group Is Its Own Worst Enemy", about how over and over and over again people who develop online social spaces get surprised by things that happen on their online space – things which had happened previously on OTHER parties' online social spaces, and which those social spaces' governance parties had attempted to warn others about.

Now, I have a bunch of reservations about specific details in that essay, but he was sure right about how over and over and over again Bad Things happen to social platforms, and the governance parties who lived through them try to warn others, and they're pretty reliably ignored.

Maybe we could not do that this time?

🧵

weekend_editor,
@weekend_editor@mathstodon.xyz

@siderea

You will probably like David Chapman's essay, "Geeks, MOPs, and Sociopaths".

It's about how communities always get invaded by those who wish to USE the community instead of BE the community.

https://meaningness.com/geeks-mops-sociopaths

siderea,

@weekend_editor I might – I'll certainly check it out – I'm just a little dubious that I'm going to find value in anything that can be described with the framing of "those who are the community versus those who use the community".

I was going to explain why, but then an in vivo example showed up in the other reply you got. There is always somebody who will be along shortly to explain why some other perfectly prosocial and usefully contributing demographic aren't really members of a community because they somehow benefit from being members of that community, so they're just using it.

siderea, (edited)

[break over, resuming]

Now, I certainly don't have one right answer to propose for what a social media platform should be doing to solve all of these ensuing problems, and I certainly hope nobody thought I did.

But what I do have to propose is a set of attitudes and approaches to building out a social media platform to try to avoid some of the bad outcomes that other platforms have experienced.

My biggest point here is to simply not have a kind of foolish hubris of thinking that because something hasn't been a problem so far, that it's been solved.

As with so many things, I think it helps enormously to look into the history of previous attempts, to get advance warning of the circumstances one may find oneself in. And, of course, in the case of social media, by "may" I mean "almost certainly will".

There are things that most definitely do not need to be surprises anymore.

🧵

siderea,

And I want to point out something else that's probably crucial to learning from past mistakes.

When we build a social media platform – when we build anything to allow people to interact on the internet – we are doing something very like building a planned city. We are making decisions about the structures through which people will flow and move and rest and encounter one another and interact with one another.

When architects are designing physical buildings and when urban planners are laying out physical cities, they make decisions about physical structures with the intention of those structures shaping human behavior. People who build amphitheaters are people who want there to be public addresses that many people hear, whether political speech or entertaining theater. People who build temples are people who want there to be collective religious worship. People who build roads want there to be travel.

🧵

siderea,

Of course architects can choose to build buildings to meet other criteria, besides the effects on the people that interact with them. They can choose to make buildings that support the environment, or save their owners money, or achieve some political end. They can also build buildings to have social effects not just through their affordances but through aesthetics, such as being beautiful to improve a neighborhood's appearance or to aggrandize an aristocracy.

But primarily buildings are built to be used, and as such they are tools, and we judge them, as we do all tools, by how fit they are for their purpose, whatever that might be.

And the purposes of buildings are to afford various ways of people interacting or avoiding interacting.

So architects think a lot about that. It's a whole thing.

Those who put together social media platforms need to think about the same sort of thing.

🧵

siderea,

We need to be very conscious that the decisions that are made about how a platform works are decisions that affect how the people who use that platform will interact.

There should be a kind of intentionality – which is something I think Mastodon is doing way better at than a lot of social media projects – around functionality decisions.

But that intentionality has to go beyond merely meaning well. Good intentions poorly informed result in bad outcomes that were never intended but are, nevertheless, still bad.

There is a lot to be said for realizing that decisions for how social media platforms work are deliberate attempts to shape – to engineer really – human social life on a huge scale. On a scale so huge in fact, that it is not wrong to describe it as trying to engineer societies.

🧵

siderea,

It's unfortunate that the term "social engineering" has a previous meaning as a slang term among computer programmers for a kind of attack on a system that leverages human frailty as opposed to faults in the software, because this – the design of social media platforms – is truly social engineering.

From where I sit, with a foot in both the technological and the social sciences, it seems really clear to me that there is no general sense that there is such a field as the engineering of online societies. Not their underlying technologies, but the use of technological deployment to instantiate social spaces, to bring about certain social realities.

This is not a thing that is taken seriously. To the contrary, it's treated quite lightly.

🧵

siderea,

The social media world is filled with people just pulling ideas out of their asses and hoping it all works out.

Folks who have been around the block a few times in a governance role have started amassing a body of lore. Case studies, observations they made in the trenches.

At the very least, availing oneself of what they have to share is a good first step.

But if we were to take this seriously as engineering, well, that suggests a few things, doesn't it?

It suggests we get a little bit more sciencey about this. It suggests we start imposing a little bit of rigor.

🧵

siderea,

Engineers tackle well-specified problems, and if the problems they are asked to tackle are not well-specified, they'll either nope out or they'll come up with their own spec.

It would probably do us good to spec out problems we think we're solving more precisely.

I cannot tell you how many conversations I have seen about the topic of "moderation" and how necessary it is in which nobody has ever bothered to set down what exactly it is that they think a moderator is supposed to accomplish.

I mean, it's all of them. I've been on the internet since the 1980s, and I have never seen anyone stop and actually talk about what they thought moderators were trying to do or should try to do.

That makes it a little tricky to evaluate whether or not moderators are given adequate tools to do their jobs. What with not actually having any agreement or understanding or even specification of what those jobs are.

🧵

siderea,

This specific example is on my mind in part because of reading @kissane's article on Facebook's role in the genocide of the Rohingya in Myanmar. One of the things it mentions is that Facebook's internal apparatus for what we might call moderation was its "bullying-focused 'Compassion Team'". Like many social media platforms constructed by the sorts of people who construct social media platforms, Facebook construed the problem of moderation as being one of preventing or discouraging interpersonal conflict on the platform.

But the problem unfolding in the Burmese-language parts of Facebook was not people disagreeing with one another or expressing conflict with one another. It was their agreeing with one another.

Agreeing to go kill their neighbors.

This was not something that was even on Facebook's radar, apparently.

🧵

siderea,

This raises some very fundamental and quite interesting questions about what the role of moderation is on a social media platform. Is it the job of a social media platform to prevent people from using it to collaborate to commit crimes?

Historically, a lot of people who have put together social media platforms have insisted it is absolutely not the job of the platform – or the people who run it – to do that.

But if it's not the job of the platform to do that, whose job is it, when a platform, by its affordances, makes real world crimes – horrendous, very serious "real-world" crimes like actual genocide – not just more likely, but so much more likely they are effectively enabling a crime that wouldn't otherwise happen?

Why should our societies – our larger, meat-world societies – tolerate the building and operating of social media platforms that destabilize them and are detrimental to them?

🧵

siderea,

Or put another way, why should our societies tolerate the existence of irresponsibly designed and operated social media platforms, that increase violence and other antisocial behavior?

So it turns out the failure of internet culture to actually have a discourse around what even moderators are supposed to be doing is a literally lethal mistake.

And this example is merely one wrinkle in the much, much larger conversation about what moderation is, and the diversity of things that it can be, and maybe should be.

A conversation that has to happen before you can have the conversation that goes, "Okay, of the things that moderation can be, which things do we think it needs to be on our platform, and what do we need to do, in the design of our platform, to bring it into existence and make it work the way we think it should?"

🧵

siderea,

Consider what has unfolded recently with Reddit turning off its API, such that tools its moderators relied on are no longer available to them. Reddit's structure is that it allows anyone to start their own forum and gives them authority to moderate it however – to a first approximation – they see fit. But it doesn't provide the tools necessary – nor, any longer, allow third parties to provide those tools – for many moderator functions to be performed, so there's a limit to what kinds of moderation can happen there, and how well it can be acquitted. This has literally changed what kinds of conversations and what kinds of forums can happen on Reddit.

🧵

siderea,

Now, I'm not party to what's happening inside Reddit. I don't know the logic of their decisions. But I do know that a whole lot of very thoughtful Reddit users who have spaces they moderate on Reddit have explained in great detail and at great length ("Concision is not our brand." – a mod from r/AskHistorians, explaining on Twitter about this very thing) what their needs are and why they were objecting to Reddit turning off the API.

Reddit corporately decided that supporting those affordances was unimportant, or at least less important than something else that conflicted with them.

Reddit made a design decision that changed the nature of what moderation could mean on Reddit. They reduced its scope. That, in turn, changed how moderators could interact with the users they moderated, and that in turn changed how users interacted with one another.

🧵

siderea,

Like I said, I'm not party to Reddit's internal corporate thinking. But I think it's a pretty good educated guess to say: Reddit's decision was not based upon what would optimize Reddit's social functions. When Reddit made this decision, I'm feeling pretty confident it was not a social engineering decision. It was not made to make Reddit function better in some social sense. Nobody made this decision thinking, "Actually, reducing the capacity of moderators to do tasks that are part of moderation will actually improve the social reality of Reddit in this particular way."

At very best, this decision was made to optimize something else in full awareness, "Yes, this will be detrimental to Reddit's social world, but it can't be helped, because of other considerations that outrank quality of social engineering right now."

But of course, the social effect on Reddit might have been simply dismissed, or discounted.

🧵

siderea,

It's a pretty common thing for people to scoff at the idea that the affordances of a platform have something to do with how people behave on it, and that if you make the wrong decisions about how your platform operates you'll get outcomes that you won't like.

It's a pretty common thing for people to take the attitude, "Oh, geeze, what difference does it make whether this feature exists? People will do whatever they want to do anyway."

One of the things Mastodon has going for it is a userbase that mostly doesn't cop out like that. Mastodon is full of people who believe deeply that how the software works and what its affordances are actually matter immensely to how social life on Mastodon unfolds.

The problem isn't convincing Mastodonians that these things matter – it's convincing them to not take their first impressions from traumatic experiences on Twitter as gospel truths.

🧵

siderea,

It's probably pretty easy for Reddit executives to sniff and say, "Well you know the user base, they're making a big to-do about nothing; they'll figure out how to moderate without those tools, they're hardly critical."

Mastodonians know better than that, but we have a bit of a problem of falling down at the next step. It's great that people here don't scorn the idea that affordances matter to user behavior, but the next step is to actually find out how affordances actually do affect user behavior.

Like, we could imagine a more enlightened Reddit not cutting off its moderators at the knees by shutting down the API access they needed. But we could also imagine an even more enlightened Reddit than that, one that built its own versions of those moderator tools right into its own platform, so that moderators didn't have to use third-party tools across the API.

But we could also imagine an even more enlightened Reddit than that.

🧵

siderea,

We could imagine an alternate-reality, even more enlightened Reddit that not only had its own built-in moderator tools, but actually was concerned with the question of whether or not it had the right moderator tools, and with how those tools affected how Reddit functioned, socially.

It might do things like hold focus group discussions with moderators; send anthropologists into various subs to observe the behavior of moderators and to virtually shadow moderators going about their moderating tasks; A/B test moderator tools; and have opt-in betas of new moderator tools.

You know, basic grown-up company stuff, when a company actually cares about how its software functions.

But not just that.

🧵

siderea,

We could imagine an ultra enlightened Reddit that actually has opinions about how it wants its social world to function, and actually makes decisions like, "We don't like some things that we think are maybe a product of moderators not providing 24/7 coverage of high volume groups, so we're going to find out if there's some way, or ways, plural, of solving that problem. We'll investigate whether there are things we can do to either facilitate moderators providing 24/7 coverage, or obviating the need for moderators to provide 24/7 coverage, and then we'll evaluate whether or not it worked to remedy the things that we saw as problems that we think are being caused by that."

In the social services field, this crucial last bit is called "program evaluation".

🧵

siderea,

I think of it as sort of like calling one's shots in billiards. You decide what change you would like to see in the system you are designing for; you come up with an "intervention" (based on studying the problem, reading up on other people's approaches to solving it, and maybe doing a bunch of rounds of iterative experimentation); you decide, up front, how you will determine whether or not the problem has been solved; then you implement the intervention; and then you check those criteria you previously identified as the ones that will determine whether or not the problem has been solved.

This is what I mean by rigor. This is pretty sciencey, no? It's not necessarily a controlled experiment, but it does have the form of an experiment. But it's not a mere experiment, either. It's not just a trial to see whether or not something will work. It's an attempt to actually do something that will work. With some slightly more rigorous testing as to whether it did.
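
In code shape – the metric, baseline, and threshold here are invented for illustration – calling one's shot looks something like this:

    # Sketch of "calling your shot": the success criterion is written
    # down BEFORE the intervention ships, then checked afterward.
    preregistered = {
        "metric": "reports_per_1k_posts",
        "baseline": 4.2,           # measured before the intervention
        "success_threshold": 3.0,  # decided up front, not after the fact
    }

    def intervention_worked(post_value: float) -> bool:
        return post_value <= preregistered["success_threshold"]

    print(intervention_worked(2.7))  # True, under these made-up numbers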

🧵

siderea,

Part of what makes it so rigorous is how formal it is, with that business of deciding in advance what the criteria of success will be. That in turn requires a certain amount of serious thinking about social phenomena, and actually getting explicit about things that, frankly, usually just get hand waved through when we're talking about social media platforms. Questions like "what even do we expect our moderators to be achieving?"

It means doing hard things like asking, "Okay, we want people to 'feel more safe': how will we be able to tell that people are feeling more (or less) 'safe'? If people were feeling more or less safe, how would we know? How would we be able to measure it? To observe it in the data? What is it that we are assuming will change in people's behavior based on how safe they feel?"

In the social sciences, this is called "operationalizing" an abstraction or concept or feeling.
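
For instance – and these proxies are illustrative guesses, not validated measures – operationalizing "people feel safe" might mean committing, up front, to observable stand-ins like these:

    # One possible operationalization of "users feel safe" into
    # observable proxies; picking and validating them is the hard part.
    safety_proxies = {
        "posting_scope": "share of posts made followers-only vs public",
        "lockdown_rate": "share of accounts requiring follow approval",
        "block_rate": "blocks filed per active user per week",
        "survey_item": "agreement with 'I feel comfortable posting here' (1-5)",
    }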

🧵

siderea,

One of the things you might notice from the fact that I've now twice mentioned that other fields have technical terminology that applies here: Oh hey. There are other fields that have clues that pertain. They have methodologies and other cool toys you might want to play with.

Engineering is, to a first approximation, applied science. So if you want to engineer socials, you might want to start hitting up the social scientists and people in other fields that apply social science.

This is something else Mastodon has going for it: it's got social scientists around here somewhere.

Now, I appreciate what I propose here has a bootstrapping problem. I don't know whether any of the decision makers about code, protocols, or individual instances have the capacity to enlist the help of the people on Mastodon with those professional clues, and I'm not sure Mastodon has the affordances to bring them together.

🧵

siderea,

But all the other challenges of connecting those parties aside, there's the one of hubris, that I want to circle back to.

Part of the reason that so many social media platforms, over and over and over and over again, found themselves shocked and surprised by things happening to them that other, earlier, social media platforms went through (and tried to tell them about) was that they assumed they knew everything they needed to know. They assumed their knowledge was adequately complete. They didn't go seeking out information and counsel to guide them, because they thought they didn't need it.

It never even occurred to them they did.

And that's a kind of hubris. It's a kind of low-key, chill intellectual arrogance. It's not blustery, it doesn't brag. It just assumes.

And that's a problem.

Because here's the thing: the people with the social clues are not going to beat down the door to shove them down your throat.

🧵

siderea,

The people with these clues – the people who've been involved in social media governance before, the social scientists, the people who do evaluation of social programs, the urban planners and the architects, the psychological professionals – they're going to do the same things they have always done. Write news articles and blog posts and social media threads, give talks at conferences and conventions, teach classes and attend symposia and colloquia, conduct scientific experiments and publish in research journals, and generally cast their messages in bottles upon the seas of information in the hopes that they will fetch up on the shores of those who would benefit by them.

They're not going to come and staple these clues to your forehead for you.

If you want them, you're going to have to go get them.

🧵

siderea,

And nobody goes to get these resources, nobody seeks out this expertise, who does not move past the naive arrogance of assuming there's nothing that they need to learn, that they have this social media platform thing all figured out already.

What I am hoping to achieve by this thread is to tantalize you with the evidence that there are things to know which you probably don't yet know, but which you would, if you only knew of them, like very much to know. Things that would benefit you to know.

I am hoping to enlist your curiosity in tackling to the floor the assumption you might harbor in your breast that this social media thing really isn't all that hard, you just do it in the right way, and which way is the right one is really obvious.

I am attempting to entice you with the knowledge that is out there (and insofar as there are experts in these things here on Mastodon, in here with us) into wanting that knowledge enough to go looking for it.

And to ask for it.

🧵

siderea,

All of you.

This message isn't just for "the people in charge".

For one thing, this is a federated system. This means the odds that you, personally, might be a "person in charge" in some sense go way, way up. And if not today, maybe tomorrow.

Furthermore, some instances are straight up democracies. Everybody on them is a person in charge.

But much more importantly, you have a voice. And if you're on Mastodon you probably use it.

I am hoping to convince you to use that voice not just to call for remedies you are certain will work to solve problems you haven't really specified. I'm inviting you to engage with curiosity questions like, "I wonder what the pros and cons of this are?" and "I wonder what it is that is giving me that impression, and how I might check out whether or not it is true?" and "I wonder if there are any other social media platforms that came up with a solution to this?"

🧵

siderea,

I'd like to encourage you to use your voice to ask questions, like "Are we using the same definition of 'unacceptable behavior'?" and "What problem do you see your suggestion fixing?" and "What are the examples you are imagining or remembering when you make that suggestion?"

And I'd also like to encourage you not to use your voice – to remember to listen, to observe, to contemplate. You may have heard that old saying about communication being a two-way street: that is in fact false. Communication is a narrow one-way bridge that traffic has to take turns crossing. If you're sending, you're not receiving. If the teacup of your comprehension seems too full of your prior understandings too preciously savored, nobody else is going to pour you a serving from their own teapot.

Like I said, nobody's going to come and force clues down your throat. If you seem not to want them, you won't be given them.

🧵

siderea,

The most interesting thing we can do with our voices is to open up spaces of discourse. And that's what I would most like to enlist you in.

I would love to see emerge on Mastodon, finally at long last, a discourse about social media platforms that centers social engineering as an actual thing. That is based on the ideas that we should be curious how things actually work and check our hypotheses against reality, that we should take responsibility for how our virtual built environment shapes the society that flows through it and the society in which it reposes, that there are things worth knowing here, things worth finding out, and interesting people to talk to.

Fundamentally I am hoping to interest you in the entire topic of social engineering, in this new sense that I mean it. Because if you are interested in it and if you talk about it with other people who are interested in it, and interest in it grows, then it will feed on itself and it will grow as a field.

🧵

siderea,

And if it grows as an idea, if it grows as a topic of interest, it will connect up people, it will enlist more people, it will drive curiosity, and discovery, and experimentation, and design.

And it will change the culture in which the people who do make the decisions make them. It will make it a culture in which it is normal to consider questions like "How will we know if we succeed at solving this problem?" and "What prior art is there in this problem space?" and "What sorts of challenges are outstanding problems in the field right now?" and "What is the effect of our platform on the larger society its members belong to?"

It seems likely to me such a social environment will have beneficial effects on the social media platforms that emerge from it.

But, I confess, I haven't operationalized that yet.

(Fin)

cenobyte,
@cenobyte@mastodon.thirring.org

@siderea Thank you so much for taking the time to write this. There's a lot to unpack. I'm an admin as well, so lots to think about here.

rimu,
@rimu@mastodon.nzoss.nz

@siderea Thank you so much for this great thread! You've given me a lot to think about.

I have been moderating all kinds of online spaces since the early 00s. Forums, FB groups, wikis, etc etc. All of that experience will inform the construction of the moderation tools on the fediverse platform I'm building (https://join.piefed.social/).

Your thread helped me get more conscious about the bigger picture of those tools, the "social engineering" that is going on.

irenes,
@irenes@mastodon.social

siderea,

@irenes Oooh, thanks! I will check that out!

KatS,
@KatS@chaosfem.tw

@siderea
I, for one, would love to read such a body of lore.
I'm a solo developer who foresees this kind of problem on a grand scale if their product has a large userbase, and would very much like a starter-pack of clues.
I'm also interested in helping with Fediverse tooling to address these issues, but don't want to be yet another heart-sink "here we go again" newbie that somebody has to coach through it.

My handicap is that it all seems to be stored in the heads of those with the experience, and/or distributed through chat logs. Are there sites or documents that people like me can learn from, without adding yet more to the mental load of those with the experience?

trisweb,
@trisweb@m.trisweb.com

@siderea have you heard of User Experience Design or HCI? They don’t get respected as much as they ought to, but they do at least exist.

dashdsrdash,

@siderea

The Fediverse/Mastodon has a mechanic which I don't think has ever popped up in social media before*: defederation of independent instances.

Right now I can see that my home here, tilde.zone, bans about 120 other servers for listed reasons.

Will we see a net.split that turns Mastodon into analogs of DW and LJ, while maintaining tenuous links? More than two major networks? Has it already happened?

Graphing this is going to be tricky, but doable, I think.

Much brainfood, thank you.

siderea,

@dashdsrdash

> Has it already happened?

Twice.

I'll see about digging you up the paper I read on this.

I've asked around about necessary prerequisites for building just such a social graph for exactly this reason, and that information is largely not public for security reasons.
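
That said, for the subset of instances that do choose to publish their blocklists – Mastodon 4.x can expose them at /api/v1/instance/domain_blocks, at each admin's option – a partial version of the graph is scrapeable. A minimal sketch, assuming the requests and networkx libraries and an example seed list:

    # Partial defederation graph from publicly listed domain blocks.
    # Only instances that opt into publishing their blocklists show up,
    # so this undercounts the real graph.
    import requests
    import networkx as nx

    seeds = ["mastodon.social", "tilde.zone"]  # example seeds
    graph = nx.DiGraph()

    for instance in seeds:
        try:
            resp = requests.get(
                f"https://{instance}/api/v1/instance/domain_blocks",
                timeout=10)
            resp.raise_for_status()
        except requests.RequestException:
            continue  # down, too old a version, or blocklist kept private
        for block in resp.json():
            graph.add_edge(instance, block["domain"],
                           severity=block.get("severity"))

    print(graph.number_of_nodes(), "domains;",
          graph.number_of_edges(), "block edges")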

siderea,

@dashdsrdash Okay, found it. Content warning: the crux issue was CP, and the person who wrote this blog post is pretty clearly identified with political positions that, let us say, you do not share.

2017 Apr 3: Ansuz (personal blog): "Mastodon WTF timeline" by Matthew Skala
https://ansuz.sooke.bc.ca/entry/335

KitMuse,
@KitMuse@eponaauthor.social

@siderea As someone around during the transfer of ownership for LJ, it was shocking to behold. Not only that, but it opened up how the Russian government was using LJ to crack down on dissidents, as well as the privacy and security issues that brought, and it's an excellent example that many may not be aware of.

I was on LJ for Buffy fandom and it seemed like each fandom had its own thoughts on the situation too, which didn't help matters.

AmbularD,
@AmbularD@fanglitch.space

@KitMuse @siderea I was on LJ for assorted fandoms at the time too, and although I don't now recall at what point in the sequence of events this happened, I do remember there was a series of DDoS attacks which disrupted or took the platform offline entirely for hours or sometimes days at a time. People in my corner of the English-speaking side were absolutely furious when we found out that the attacks were aimed at political bloggers over on the Russian-speaking side. I recall wondering (bear in mind this was close to 20 years and a lot of learning ago) why the staff didn't just kick the offending bloggers off or move them to a separate server where their activities wouldn't ruin LJ for everyone else.

siderea,

@AmbularD Thank you for sharing that example. It's kind of perfect.

@KitMuse

tknarr,
@tknarr@mstdn.social

@siderea
There's something else that makes a difference too: the distributed nature of the platform. On a centralized platform, you can't go elsewhere without leaving the platform entirely. On a distributed platform, you can go somewhere else (to a different instance) without leaving the platform or losing your connections. Any instance that allows griefers and trolls free rein will find everyone else going elsewhere.

siderea,

@tknarr This is the kind of comment that makes me dislike Mastodon.

Look, you have the opportunity to learn something new. Somebody is saying something that you haven't already heard. But instead, you're more interested in talking about what you are already certain about.

The jingoistic pro-Mastodon nonsense you are spouting is garbage, and I feel, about its making its appearance in my space, about the same I would if somebody threw trash in my wading pool.

mainec,
@mainec@fromm.social

@siderea I really appreciate your thread. In particular as it is much easier to think about remedies while it's still quiet, as opposed to when the storm starts.

Things I would add to the list: people trying to game the system to spread their knowledge are attracted not only by the size of the platform but also by the influence people have elsewhere – think journalists or politicians active here.

BillySmith,
@BillySmith@social.coop

@siderea

When the internet was first taking off, you had to have a university connection to access it, which meant that all of the policing of the trolling was taking place outside of the internet, and in the larger institutions.

When a university Sysadmin is told "Sort these people out, or we will disconnect the whole university from the internet!", then the individuals causing the problems get dealt with very quickly.

krupo,

@siderea of course Usenet had the alt.* hierarchy, which was described as the home of "anarchists, lunatics, and terrorists" – which I always found amusingly self-deprecating, but through the lens of your posts, it comes across as a lot more judgy.

lwriemen,

@siderea I wonder how much science was linked to the Omni article. It's been a long-time assertion of fans of Windows, but I haven't seen anyone link to studies that give size a larger correlation than Windows' inherent lack of security.

siderea,

@lwriemen Oh that wasn't from the Omni article. That was an absolute commonplace in IT back when I was doing support before I became a full-time developer, so 1990s.

The other possibility I'll note: back then, Macs were considerably more expensive than the equivalent PCs, so insofar as virus writing was the sport of disaffected hoodlums, they needed to be well-bankrolled disaffected hoodlums to ply their trade on Mac 7.5 or whatever.

Crell,
@Crell@phpc.social

@siderea Please copy this brilliant thread to a blog post or article somewhere so it's easier to link back to later. It is too good to get lost to the backscroll ether.

Crell,
@Crell@phpc.social

@siderea
I've been moderating/governing online spaces for 25 years, and I've come to one clear conclusion: You will never make a space comfortable for everyone. It will not happen. Accept it, and decide explicitly what kind of person you want to feel "safe" and which you don't.

And anyone that is outside the Overton Window you want to have, exclude fast, exclude hard. (cf the "Nazi bar" story that's gone around, but for much milder things than Nazis.)

Crell,
@Crell@phpc.social

@siderea More mundane example: If you want a Star Trek fan forum, cool. If someone keeps starting threads about Game of Thrones instead... they don't belong there. Really, they simply need to leave. Not because GoT is bad, it's just Not Appropriate Here(tm).

This of course gets much harder at scale, as you note. Which is why mega-scale networks... maybe aren't even the right model in the first place.

siderea,

@Crell Oh, hard agree! I actually have two other rants in me illuminating exactly this.

The issue isn't mega-scale networks, it's whether or not there are boundaries that demarcate topic spaces. In THIS space we discuss Star Trek, and in THAT space we discuss Game Of Thrones. Both spaces can be on the same network or even on the same server, but it has to be possible for the people who want to see only Star Trek to not see any GoT, and the people who want to see only GoT to not see any ST.

Not because either is offensive to the other, but because neither group will feel that it is in control of its experience unless that is true, and they are likely to start treating the others as interlopers and get defensive and hostile.

siderea,

@Crell it's like Frost wrote: Good fences make good neighbors.

Crell,
@Crell@phpc.social

@siderea And sometimes you want a common identity in both the Trek and GoT areas, and sometimes you don't. (Eg, professionally-related topic and your BDSM topics.)

In some ways, Discord has gotten this most-right so far. (Or least-wrong.) (Common login, you're "in" each server at once, but can only see shared servers, not people's non-shared servers, lots of mod configurations, bot support, etc.) There's probably something to learn from that.

siderea,

@Crell That is the plan! Maybe it'll be up by tomorrow? I'll try to get back to you with a link when I do.

mapto,
@mapto@qoto.org

@siderea @Crell post it here please. Thank you for this huge work!

mapto,
@mapto@qoto.org

@siderea @Crell did you see this? Already making waves in the Italian community. https://mastodon.social/@fedifaschifo/111488849240369217
PS: translator's nickname/handle means "the fediverse sux," literally

siderea,

@mapto

🤣 I'm getting the notifications, but I really appreciate you pointing it out for me, and, no, I hadn't known that about the username. Love it.

@Crell

mapto,
@mapto@qoto.org

@siderea @Crell well, I do have to apologise for all the unintelligible Italian that you'll keep getting for a while. But hopefully the fact that this is making an impact should compensate for the annoyance.

siderea,

@mapto No apology necessary! I can usually sort out the big words from English cognates and my very rusty Latin, also I am exercising the hell out of Google translate.

@Crell

virtuous_sloth,
@virtuous_sloth@cosocial.ca

@siderea @Gargron

Do you recall how Google+ had circles and you could limit visibility of posts and the ability to share posts at the level of a circle or a circle plus one relation removed (I think?).

siderea,

@virtuous_sloth

Sure. What about it? What I most remember is how at the time it was widely seen as a pale shadow of LiveJournal's model (still being used at Dreamwidth) by those of us who had experience of both, though I don't remember the details of where G+'s circles fell down.

To this day, no platform has come up with a better Access Control Lists (ACLs) model than LJ.

(Now, I think I can improve on it! I have some ideas. That's a separate rant.)

But if you're looking to make suggestions for Mastodon to adopt an ACLs model, you might want to leverage the LJ model as an example (if one does not want to access LJ, for the various good reasons people don't want to access LJ, Dreamwidth is still right there).

Now, all that said, there are some interesting sociotechnical reasons that federation makes ACLs very hard.
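
For those who never used LJ, a toy simplification of its model, from memory – not LJ's actual schema: every post carries a security level, and custom security levels point at named friend-groups.

    # Toy sketch of an LJ-style ACL model (simplified, from memory):
    # a post is public, friends-only, or restricted to named groups.
    from dataclasses import dataclass, field

    @dataclass
    class Journal:
        friends: set[str] = field(default_factory=set)
        groups: dict[str, set[str]] = field(default_factory=dict)

    def can_read(journal: Journal, security, reader: str) -> bool:
        if security == "public":
            return True
        if security == "friends":
            return reader in journal.friends
        # otherwise `security` is a collection of group names
        return any(reader in journal.groups.get(g, set()) for g in security)

    j = Journal(friends={"alice", "bob"}, groups={"close": {"alice"}})
    print(can_read(j, ["close"], "alice"))  # True
    print(can_read(j, "friends", "carol"))  # False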

dabblecode,
@dabblecode@techhub.social

@siderea you mention social engineering is a term already in use for other purposes. I kept thinking what you describe sounds more like an online extension of civil engineering, which (as I understand it at least) is concerned with building and maintaining environments for people to coexist successfully (safely, productively)

siderea,

@dabblecode Oh, no. (Heh. Source: was briefly a civil engineering major, once upon a time.)

Civil engineering is entirely about matter, building with it, and keeping it from falling down once it's been built up. It's not at all concerned with the social-behavioral effect of what it builds on the people that encounter it, except that it is not supposed to kill people, except when it is.

The social-behavioral stuff? That's "design", and that's the architect's job. And then the architect hires the civil engineer to figure out how to put up the thing they designed so it doesn't fall down.
