thisismissem,
@thisismissem@hachyderm.io avatar

It always amazes me how, in discussions of the future of services, moderation is generally an afterthought.

Okay, we add nomadic identity, how does that impact the ability for servers to moderate, given all moderation tooling is built on user + server identity?

Okay, we add the ability to put people on lists without following, how does that impact moderation and privacy?

codesmith,

@thisismissem Do I understand correctly that your point is not that we shouldn't consider new features/directions, but that discussions around new features/directions should inherently include discussions on how that affects moderation and privacy and what new tools might be needed in those areas?

thisismissem,
@thisismissem@hachyderm.io avatar

@codesmith yes, exactly.

Adding new features or taking a new direction is great, but we must make it mandatory to consider how that impacts moderation, trust & safety, and privacy. If we don't, then we'll potentially end up doing more harm than good.

codesmith,

@thisismissem Fully agree!

volkris,

@thisismissem

But many of us would rather have agency than instance moderation.

I’m always focused on giving users as much control as possible over their own experiences, so shifting the balance to them and away from instance moderation is in line with my preferences.

Different people have different preferences on that question, but for a lot of us, such a shift is what we’d prefer.

daphnestar,

@thisismissem This sort of thing drives me nuts. I've never moderated an instance or contributed to its codebase in any capacity, so my observations come with the "this may only be true in a very limited sense" warning, but...

It feels like a lot of people view the task of building a social network to be a solely technical endeavor, with no consideration for how any particular feature or system affects the social dynamic. Or maybe it's just a few admins and contributors that feel that way, but it seems to permeate the experience of using the network. sigh I don't know. I see people bouncing off Mastodon and leaving for BlueSky and I get pretty sad about it. Sorry for venting in your direction.

wago,

Before Facebook I built a social networking site for models and photographers. It had about 30,000 accounts as I remember. But people didn't really post links and text content, mostly photos, so it was faster to moderate. There were two full-time moderators and two who could help as needed. Every post was human-reviewed before being published. However, an account could be marked for auto-approve, like if the user had established trust. Not a lot of issues, but some freaks and people harassing models. There was a producer for a major TV network show convincing models they could be on TV, getting their phone numbers, then making sick, twisted calls to them at all hours. When I let him know I knew his identity and what he was doing, he quit. It would have been a big news story :)

thisismissem,
@thisismissem@hachyderm.io avatar

@wago some things resolve like this, but many others don't. This also feels like mansplaining in a weird way too, ngl. I've my own experiences as a moderator and developer of sites.

Human moderation for every post works up to a point: when there's 1.5 million MAUs and 10 million registered accounts, with 400 million to 1 billion posts a month, human review prior to posting doesn't really work.

Anyway, this is just saying "consider moderation and trust & safety in designing or rolling out tech"

mike,

Reality check.

If your permissions/moderation system doesn't work because of something/anything that somebody else does, then your permissions/moderation system didn't work to begin with.

thisismissem,
@thisismissem@hachyderm.io avatar

@mike mm, but so far being able to moderate on the combination of (identity + server) has been the standard. If you propose to do differently, then you need to look at the moderation impacts and figure out solutions — we can't just go "I broke an existing expectation, it's your problem", even if you disagree with how the system worked originally

thisismissem,
@thisismissem@hachyderm.io avatar

@mike also, this isn't a post against nomadic identity, just that we need to think through the implications of this change & how we can potentially mitigate the negative impacts that may exist.

Technology does not live in a vacuum, and I know you know that.

mike,

Let's take nomadic identity out of the equation. Here's the problem... user + server does not have a one-to-one mapping with individual humans. Never did, never will. I have test accounts on a bunch of fediverse servers. So do lots of other folks. This is not a new phenomenon. I had a dozen email addresses in 1985 for testing SMTP and USENET on the ARPANET. If somebody's moderation tooling is dependent on this 1:1 mapping to work correctly, it not only doesn't work now - it never worked, regardless of whether nomadic identity exists in the fediverse.

thisismissem,
@thisismissem@hachyderm.io avatar

@mike Once again, not against nomadic identity, in fact I think it should be good; I just wanna make sure we can evolve moderation tooling before it launches / gets wide adoption.

Yes, that may require rethinking approaches to moderation, and the people most likely to be able to advise are those most familiar with nomadic identity and how that could impact moderation activities. I've so far never heard of anyone talking about how nomadic identity fits into the predominant view of the fediverse.

thisismissem,
@thisismissem@hachyderm.io avatar

@mike that is, I've seen a bunch of people celebrating that nomadic identity may make them free from moderation, and potentially from the consequences of their actions, which seems like it'd be a dangerous thing to have happen.

That's why I'm saying that all improvements to the fediverse must consider the pros & cons of the change in relation to moderation and trust & safety. (Also why we may need to re-examine past features)

SoniEx2,
@SoniEx2@chaos.social avatar

@thisismissem we thought the point of nomadic identity was about losing the ability to control your users?

like the impact to moderation is the point.

thisismissem,
@thisismissem@hachyderm.io avatar

@SoniEx2 right, but it's not just about being able to control your users, but also, being able to control who can interact with you.

If I block you, then you should stay blocked. Tying a user's identity to their instance and making that tie immutable means there's a reasonable barrier: you need to create a new account, potentially on a new instance, to evade the block and continue abuse/harassment.
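To make the status-quo assumption concrete: blocks today are keyed on the (user, server) pair, so evading one requires a fresh account. A minimal sketch (all names hypothetical, not any real fediverse codebase):

```python
# Hypothetical sketch: status-quo blocking keyed on (username, domain).
class Blocklist:
    def __init__(self):
        self._blocked = set()  # set of (username, domain) pairs

    def block(self, username: str, domain: str) -> None:
        self._blocked.add((username, domain))

    def is_blocked(self, username: str, domain: str) -> bool:
        return (username, domain) in self._blocked

bl = Blocklist()
bl.block("abuser", "example.social")
print(bl.is_blocked("abuser", "example.social"))  # True
# The barrier being described: evading the block requires a new account,
# possibly on a new instance, which this key no longer matches.
print(bl.is_blocked("abuser", "other.social"))    # False
```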

SoniEx2,
@SoniEx2@chaos.social avatar

@thisismissem wouldn't that follow from the identity no longer being tied to any specific instance? you just block whole identities, not specific accounts. you block all the alts.

thisismissem,
@thisismissem@hachyderm.io avatar

@SoniEx2 well, how do you block the identity, if that's totally fluid? Would you block say, the DID identifier, and that'd block the “account” for that identity? (if DIDs are even used for this)

SoniEx2,
@SoniEx2@chaos.social avatar

@thisismissem oh wait we're making assumptions. sorry.

uh we mean do you genuinely expect ppl to do key management? (lmao)

our expectation for a system of nomadic identity is that ppl would still need to make accounts on additional servers, and then connect them together. so if you block any account it blocks all of the others. this avoids key management problems and ideally it'd rescan them periodically to block new alts.

more accurately we'd call this multihoming.
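The multihoming behaviour described above (blocking any one account blocks all linked alts, and periodic rescans catch newly linked ones) could be sketched like this; all names are hypothetical:

```python
# Hypothetical sketch of "multihoming": accounts on several servers are
# linked to one identity, and blocking any account blocks all known alts.
class Identity:
    def __init__(self, accounts):
        self.accounts = set(accounts)  # e.g. {"user@a.social", "user@b.social"}

class Blocklist:
    def __init__(self):
        self.blocked_accounts = set()

    def block(self, identity: Identity, account: str) -> None:
        # Blocking one account propagates to every linked alt.
        assert account in identity.accounts
        self.blocked_accounts |= identity.accounts

    def rescan(self, identity: Identity) -> None:
        # Periodic rescan: if any alt is blocked, block newly linked alts too.
        if self.blocked_accounts & identity.accounts:
            self.blocked_accounts |= identity.accounts

ident = Identity({"soni@a.social", "soni@b.social"})
bl = Blocklist()
bl.block(ident, "soni@a.social")
print("soni@b.social" in bl.blocked_accounts)  # True
ident.accounts.add("soni@c.social")            # a new alt appears later
bl.rescan(ident)
print("soni@c.social" in bl.blocked_accounts)  # True
```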

SoniEx2,
@SoniEx2@chaos.social avatar

@thisismissem on top of multihoming, it would be possible to implement opaque federation (a term we came up with), which is a way to connect identities while sidestepping admins/instances. this would be opt-in-only, basically a replacement for having mutuals.

SoniEx2,
@SoniEx2@chaos.social avatar

@thisismissem personally we genuinely think making ppl think about key management would be bad tho. we would much rather see nomadic identity systems that don't rely strongly on key management, even if they are, in some ways, cryptographically weaker.

thisismissem,
@thisismissem@hachyderm.io avatar

@SoniEx2 oh, to clarify, I only used DIDs as an example, I've no opinion about how nomadic identity should be implemented, just that we need to make sure we can still effectively moderate given nomadic identity.

So, currently the tooling is all centred around account + server identity, so for instance, how would something like domain blocks work in nomadic identity? Would there be something comparable?

SoniEx2,
@SoniEx2@chaos.social avatar

@thisismissem you can domain block. it blocks the instance/domain. it doesn't inherently block the accounts hosted by that domain.

we like to think this is the entire point of nomadic identity/opaque federation. anything else is guilt by association as a service.

if someone has alts on a bunch of shitty domains, and you block all of those shitty domains, you do not need to block the account too.

thisismissem,
@thisismissem@hachyderm.io avatar

@SoniEx2 right, so, currently blocking a domain implicitly blocks all the accounts too, which is what you'd expect from a trust & safety perspective.

I'm not sure what you really achieve by blocking a domain if that doesn't block all that domain's accounts too?

SoniEx2,
@SoniEx2@chaos.social avatar

@thisismissem well, we guess more specifically it does block the accounts, but not the identities associated with those accounts.

if you want to block every identity to ever have touched a given domain, it's mostly possible: simply make every domain sign the identity, and ask the identity holder to please provide every signature. ultimately you can't force the identity holder to provide all of them.

and we like to think that's the point.
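The signature scheme sketched above, where each domain signs the identity and moderation can only check the signatures the holder chooses to present, might look roughly like this (signatures stubbed as plain strings; purely illustrative):

```python
# Illustrative sketch only: each domain an identity has "touched" signs it,
# and moderation asks the holder to present those signatures.
def sign(domain: str, identity: str) -> str:
    return f"{domain}:{identity}"  # stand-in for a real cryptographic signature

def touched_blocked_domain(identity, presented_signatures, blocked_domains):
    for sig in presented_signatures:
        domain, _, who = sig.partition(":")
        if who == identity and domain in blocked_domains:
            return True
    return False

sigs = [sign("shady.social", "id-123"), sign("fine.social", "id-123")]
print(touched_blocked_domain("id-123", sigs, {"shady.social"}))  # True
# The limitation being described: the holder can withhold a signature.
sigs_withheld = [sign("fine.social", "id-123")]
print(touched_blocked_domain("id-123", sigs_withheld, {"shady.social"}))  # False
```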

SoniEx2,
@SoniEx2@chaos.social avatar

@thisismissem nomadic identity isn't meant to simplify moderation. it's meant to stop the moderators/admins from being able to control the users.

the admin controls the content on their instance, the user controls the content on their feed, and these should be independent things without too much influence on each other.

thisismissem,
@thisismissem@hachyderm.io avatar

@SoniEx2 right, but as your feed is served by their instance, they can also say “yeah, nah” and refuse to serve you CSAM just because you followed an account posting it, right? (it may sound like an extreme example, but this is one area where domain and instance-level user blocks are employed)

Basically, if we introduce nomadic identity, we should have the same or greater ability to ensure trust & safety as we do currently.

SoniEx2,
@SoniEx2@chaos.social avatar

@thisismissem sorry, can you clarify the parties involved? (call them A, B, C, etc)

thisismissem,
@thisismissem@hachyderm.io avatar

@SoniEx2 okay, you have Identity A, you use Server XYZ.social, you follow Identity B. Identity B starts posting CSAM. XYZ.social says “screw that” and wants to prevent Identity B from distributing CSAM through XYZ.social (as XYZ.social caches a copy of Identity B's content for privacy reasons, rather than serving it directly from wherever Identity B posted it).

thisismissem,
@thisismissem@hachyderm.io avatar

@SoniEx2 Currently, if Identity B is on XYZ.social, then they can block the account & IP address. If Identity B is not on XYZ.social, they can either block Identity B on their instance (say, ABC.social) or they can decide “nah, ABC.social is too much of a risk to be associated with" and block ABC.social entirely.

SoniEx2,
@SoniEx2@chaos.social avatar

@thisismissem yes so instead of "Identity" you have "Account", but in practice suspending the identity would suspend all known accounts (because it seems silly to do otherwise).

meanwhile, suspending the instance would not automatically block every other identity associated with that instance, only the accounts.

since identities are supposed to be flexible, you have no way of knowing if new accounts (after the suspend) are new alts, but that's the same as the status quo.

thisismissem,
@thisismissem@hachyderm.io avatar

@SoniEx2 right, so I think we'd have a regression in the ability to moderate and handle trust and safety matters.

Maybe there's an argument to be made for that regression, i.e., the benefits of nomadic identity outweigh the costs to the ability to effectively moderate. I'm not sure. I don't think I agree with that assertion.

SoniEx2,
@SoniEx2@chaos.social avatar

@thisismissem how so? we don't see a regression. why's the status quo a regression?

SoniEx2,
@SoniEx2@chaos.social avatar

@thisismissem (technically it's slightly better than the status quo - specifically around account suspends, because it catches all known (at the time of the suspend) alts.)

thisismissem,
@thisismissem@hachyderm.io avatar

@SoniEx2 I mean, the whole notion of a regression is that something doesn't work like it used to work. Maybe the status quo is entirely wrong here, maybe it's actually acceptable that it doesn't work like it was previously working, but that is something that will require all software developers to acknowledge and build for, and for moderation teams to retrain for. That has a cost associated with it.

SoniEx2,
@SoniEx2@chaos.social avatar

@thisismissem so basically making account suspends slightly more effective is a regression simply because it's different...?

SoniEx2,
@SoniEx2@chaos.social avatar

@thisismissem anyway, nomadic identity revolves around the idea that your users should have more options to deal with moderation decisions of yours that they find objectionable.

but it's primarily a client-side solution. we'd even argue your server is not supposed to know about it.

SoniEx2,
@SoniEx2@chaos.social avatar

@thisismissem we see your point wasn't about nomadic identity, but about adding new features without considering moderation. which, fair, and definitely something that we too feel applies to lists without following. sorry.

but nomadic identity is the worst possible example, because it has everything to do with moderation. ultimately mods should prevent harassment and keep the server out of trouble, but not actively try and ruin our social graph. mods should have agency, not control.

thisismissem,
@thisismissem@hachyderm.io avatar

@SoniEx2 those were just two things I'd recently seen where I went "oh, gosh, I wonder if they've thought about how this would impact trust and safety?"

The point is that everything has pros and cons when it comes to the different trade-offs those ideas involve, and I'm just stressing that we really need to be evaluating these ideas within a framework that factors in trust and safety, that's all.

jdavidnet,
@jdavidnet@me.dm avatar

@thisismissem but isn't it really identity + server + credentials + validation?

On my Mastodon server I can provide rel links to secondary identity providers, and then I have to prove I am able to modify those servers/HTML pages. Each of those rel links is a validated credential. By linking to high-value secondary credentials, it should be harder to make “fleeting”/spam accounts look “strong”. Maybe accounts that lack strong secondary “rel”s should get less trust/privileges?
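For reference, the rel link check being described works along these lines: the linked page must contain a link back to the profile carrying rel="me". A simplified sketch using Python's stdlib parser (fetching is stubbed out; a real verifier would fetch the page over HTTPS):

```python
# Simplified sketch of a rel="me" back-link check, Mastodon-style:
# the linked page must link back to the profile URL with rel="me".
from html.parser import HTMLParser

class RelMeParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.rel_me_links = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # Collect hrefs of <a> / <link> elements whose rel includes "me".
        if tag in ("a", "link") and "me" in (a.get("rel") or "").split():
            self.rel_me_links.append(a.get("href"))

def verifies(page_html: str, profile_url: str) -> bool:
    parser = RelMeParser()
    parser.feed(page_html)
    return profile_url in parser.rel_me_links

page = '<a rel="me" href="https://hachyderm.io/@alice">Mastodon</a>'
print(verifies(page, "https://hachyderm.io/@alice"))    # True
print(verifies(page, "https://hachyderm.io/@mallory"))  # False
```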

thisismissem,
@thisismissem@hachyderm.io avatar

@jdavidnet that's not necessarily a good option because many people in society don't have a web presence they control outside of their social media.

But this isn't just about spam, it's also about abuse & harassment, and how new practices may break current implementations or assumptions.

jdavidnet,
@jdavidnet@me.dm avatar

@thisismissem people don’t need to control the full html / server, other servers / sites could serve as credentials like “LinkedIn” or “Veritas” or new services.

Each site could have credential processes and behavioral rules.

Federated identity is going to need to use a diaspora of services to build up a validated identity, I don’t see a way around it.

thisismissem,
@thisismissem@hachyderm.io avatar

@jdavidnet that'd then only be as good as those sites (which are centralised and very large online platforms) moderation.

But you seem to still be thinking technical solutions to social issues.

jdavidnet,
@jdavidnet@me.dm avatar

@thisismissem yeah, I’m sorry, I guess we are focused on different parts of the issue.

I was a social media software engineer, and have worked at hi5 & MySpace & other social media startups. I’ve hated Facebook and “centralized” social identity from the beginning and fought hard as an engineer for 10+ years against it.

Mastodon is our first real shot at getting away from the centralization, and it seems like the fediverse has a real shot here, from a technical standpoint.

jdavidnet,
@jdavidnet@me.dm avatar

@thisismissem I see the social issues that follow from technology & engagement algorithms as, in my opinion, the biggest problem.

The fediverse doesn’t seem to have this problem. By contrast, (ad) platforms like Facebook and X (Twitter) need to feed the wrong behaviors to increase ad impressions, and track an “interest graph” to micro-target ads to increase profits.

The fediverse doesn’t need this, as communities run servers, or people pay for hosting, so you don’t have to fight the algorithms

jdavidnet,
@jdavidnet@me.dm avatar

@thisismissem not having to fight the algorithms that feed hostility will give the fediverse time to adapt to the other issues, patterns, and best practices that need to develop over time.

I can see the possibility for people to “validate” their profiles against 3rd party services or say “behavior guilds” that host sites.

I imagine these to be like professional orgs, that have oaths and ethical standards that people are judged against.

jdavidnet,
@jdavidnet@me.dm avatar

@thisismissem for instance imagine a journalist guild that hosts a site, & then you create an account there & agree to a TOS, take an Oath, & agree to ethical standards of journalism & reporting.

& as long as your dues and behavior passes muster, they offer up a “rel” tag to validate your mastodon profile, & as such you get a “journalism guild” check mark (of sorts).

Over time people will see, this account is in good standing with the “journalist guild” so they can be trusted.

jdavidnet,
@jdavidnet@me.dm avatar

@thisismissem maybe you will want to validate your age, constituency, profession, or some other value.

10years from now you might have a list of 10 credentials and orgs that validate your being and behavior.

Maybe you will want to secretly validate some of these things, so maybe some validation will say require “adults” but doesn’t need to know your age.

I’ve been thinking of building such a service, but wasn’t sure how to pay the rent with it.

thisismissem,
@thisismissem@hachyderm.io avatar

@jdavidnet I'm not saying I disagree with link verification, or link verification services, I'm just saying that that can't be the extent of the thinking around trust and safety.

You can't make depending on a third-party the key foundation of your trust and safety model. Third parties disappear all the time. So we need to holistically look at how changes may affect trust and safety tooling and existing approaches to moderation.

jdavidnet,
@jdavidnet@me.dm avatar

@thisismissem sure, but there is no reason in the fediverse that you wouldn’t have redundant, overlapping validation services.

Why not participate in multiple guilds?

But I’d like to hear any problem/solution pairs you have in mind.

thisismissem,
@thisismissem@hachyderm.io avatar

@jdavidnet you really aren't understanding what I'm saying. We can't have just one thing be the crux of the trust & safety approach. You also can't just break the entire existing mechanism for trust and safety, which is server/account identity based.

It might solve one issue (spam), but it could create additional problems (e.g., what prevents spammers from just running a service that acts as a "trust” participant by hosting a service that verifies all their spam accounts as “real" accounts?)

jdavidnet,
@jdavidnet@me.dm avatar

@thisismissem because those “spam” domains will have no trust or social value

It’s not just one layer of trust, it’s layers of trust.

Built correctly, each verifying rel site would use SSL & require a high-quality certificate from a first-party SSL verifier that validates identity, org creation, address, and payment. Some SSL certs require even more validation. SSL only prevents spoofing.

So, then clients, apps, and other layers would still assess the quality of each domain

thisismissem,
@thisismissem@hachyderm.io avatar

@jdavidnet okay, so now you're deferring trust to the SSL provider. How would Let's Encrypt be treated here? Lots of legitimate sites use it, and it's free.

jdavidnet,
@jdavidnet@me.dm avatar

@thisismissem well, that’s up to the other layers of the system.

Aggregators and tastemakers, guilds, communities, platforms, etc. would make their own decisions on self-signed certs, just like any cross-platform comms do now.

Self-signed certs don’t validate much, but they are anti-spoofing. I wouldn’t transact money on a self-signed certificate, and browsers and CCs won’t let it happen either.

jdavidnet,
@jdavidnet@me.dm avatar

@thisismissem
No reason an individual or small group wouldn’t self sign or let’s encrypt.

However, serious groups, communities, guilds, & corps will want a better cert. especially if they plan to proxy validate roles, or other certifications.

If the Screen Actors Guild or IMDB had a “rel” server they would want to have a “verisign” or other high quality cert. to add a level of authority and authenticity to their server, to further validate they are a real org with a real address

thisismissem,
@thisismissem@hachyderm.io avatar

@jdavidnet mastodon.social is literally using a Let's Encrypt certificate, and me.dm is using Cloudflare; both are free certificate providers.

jdavidnet,
@jdavidnet@me.dm avatar

@thisismissem but neither of those provides a validated identity, which is exactly my point.

If I wanted to prove who I was, there has to be a validation chain, & a technical means of doing that.

So the two at-scale solutions are SSL or blockchain, & I think SSL is operating at a larger scale, has a proven track record in this specific problem, and it scales fiscally relative to how much of an identity stake you need.

  • some identities already have high quality SSL certs

thisismissem,
@thisismissem@hachyderm.io avatar

@jdavidnet so… you've removed Let's Encrypt and Cloudflare from your device’s certificate trust chain? Because according to what you've said, those are untrustworthy as they aren't paid for.

Seriously, trying to rely on SSL as an indicator of "trustworthiness" is a terrible idea. (also, what happens if someone MITMs you and messes with the presentation of SSL certificates?)

jdavidnet,
@jdavidnet@me.dm avatar

@thisismissem I’m waiting for you to mention a better idea that can be benchmarked against some other tech at scale.

I see no reason that someone couldn’t use Let’s Encrypt. But if they want to claim they are an identity at NYT, IMDB, GitHub, etc., I sure as hell hope they are not using some random domain on a Let’s Encrypt SSL cert.

But if someone on the fediverse wants to claim an identity on their blog or hobbyist site, Let’s Encrypt seems great!

thisismissem,
@thisismissem@hachyderm.io avatar

@jdavidnet so why should a profile on GitHub be more proof than a personal website? Anyone could've created that GitHub profile. Just because GitHub hosts it doesn't make it inherently trustworthy.

You're looking for a technical solution to a social problem, and that's where you’re going wrong. Anyway, I'm gonna disengage from this conversation because it’s truly going nowhere and you seem to not want to understand what I'm saying.

jdavidnet,
@jdavidnet@me.dm avatar

@thisismissem I’ll try to describe it differently.

If you are trying to prove you are identity x that made project y on GitHub then there is no better way than to host a REL on GitHub.

If you are trying to prove you are identity m on blog n, then let’s encrypt on blog n is fine, but if you are trying to prove you are identity m(x) on blog n, then let’s encrypt isn’t going to cut it.

It might just be impossible to describe this in short form, might require a medium post, etc

thisismissem,
@thisismissem@hachyderm.io avatar

@jdavidnet Yeah… I'd suggest you write that Medium post. Keep in mind, a significant proportion of Mastodon users (new or existing) struggle with basic link verification, let alone whatever it is you're trying to suggest. If you hinge all of your trust and safety on a feature that a significant proportion of people already have trouble with, I think you're probably going to have major problems with your approach to trust and safety.

jdavidnet,
@jdavidnet@me.dm avatar

@thisismissem I’m not saying it’s everything; I’m saying this is a scalar trust mechanism.

I also think people and services don’t do much with REL because it’s not useful yet. If we make it useful, then people will put the effort into REL links.

Also it’s super lame you only get 4 REL links. You should be able to have as many as you want!

jdavidnet,
@jdavidnet@me.dm avatar

@thisismissem SSL providers validate trillions of dollars of value every day. I’d use it because it works, and it doesn’t have to be expensive at first.

There are cheaper top tier SSL cert providers, but I can’t remember them off of the top of my head right now. I’m sure a search would find many, ranging in cost from $100-$1000.

Which should be a reasonable cost for serious communities. It should also offer sufficient freedom as there are a multitude of providers.

thisismissem,
@thisismissem@hachyderm.io avatar

@jdavidnet my point is that trying to infer trust based on whether or not someone paid for an SSL certificate is a terrible idea. It also breaks the whole neutrality aspect of SSL providers — as long as the SSL certificate has an authority you trust in its certificate chain, then you should trust it.
