ophiocephalic, to FediPact
@ophiocephalic@kolektiva.social avatar

Observing fedi-folk from various marginalized communities snipe at each other over the past week has been devastating and tragic. No conspiracy theory here, but if there were some nefarious plot to weaken the fediverse, provoking a conflict like this one would be an effective way to go about it.

The purpose of this post isn't to further stir the shit. But it's worth taking a look at origins, alternatives and possible consequences in light of the ongoing threat of authoritarian and capitalist recuperation looming over the fedi.

1/11

ophiocephalic, to FediPact
@ophiocephalic@kolektiva.social avatar

Fediverse Communalism 1

For those interested in the prefiguration of dual power, there is a perfect opportunity right under our noses - the fediverse. Moreover, such praxis may be not so much a choice as a necessity. The forces of authoritarian and capitalist recuperation are coming for this network.

So far, it remains largely out of the control radius of corporations, government security services and the fascists poisoning every other online environment. But there are well-resourced elements both without and within working to change that.

Consider the contrast with major capitalist services. This recent story explains how the "U.S." government has attempted to extort a price from TikTok in exchange for allowing its continued operation in the country: conversion into a domestic mass surveillance tool under the control of state security and military agencies.

https://gizmodo.com/tiktok-cfius-draft-agreement-shows-spying-requests-1850759715

1/20

ophiocephalic,
@ophiocephalic@kolektiva.social avatar

The prospect of absorption into the Zuckerberg surveillance entity is a menace, but also an opportunity. That prospect itself, however, needs to be examined: opinions are being voiced that Meta will not, in fact, federate at all.

Some of these claims are good-faith efforts to critically regard the well-established propensity of the corporation to lie through its teeth at every opportunity. While that propensity is real, there are clear and ongoing indications that Meta has both the intention and the motive to proceed.

First, Meta has planted an engineer in the W3C ActivityPub working group. This may be a precursor to custom additions to the spec which could facilitate advertising and behavioral surveillance protocols.

https://thenewstack.io/threads-adopting-activitypub-makes-sense-but-wont-be-easy/ (pro-Meta propaganda)

2/20

ophiocephalic,
@ophiocephalic@kolektiva.social avatar

Next, this post on Threads, by a Meta engineer, indicates a team of at least four working on "fediverse workstream from Threads".

https://www.threads.net/@0xjessel/post/Cv3Jxs0P84d

An additional confirmation of intent is the fraudulent "CSAM-scare" influence operation fronted by Facebook's ex-"Chief Security Officer" and another "Security and Safety" bigwig previously at Facebook. Resources are being expended to mold the network into a semi-centralized, surveilled and shovel-ready data mine suitable for ingestion into the Zuckerverse.

More on the Facebook Mafia behind the influence operation here: https://kolektiva.social/@ophiocephalic/110772380949893619

But fedi-folk are smart, and the July disinformation attack was met with a critical eye. It would be tempting to assume that the community's general dismissal would have been the end of it. That is not the case.

3/20

ophiocephalic,
@ophiocephalic@kolektiva.social avatar

At this link, you can read the transcript, or listen to an audio recording, of an early August meeting attended by the Stanford operatives, a co-author of the ActivityPub standard, and a prominent fediverse developer, as they discuss their plans to impose centralized algorithmic surveillance onto the network.

https://github.com/swicg/meetings/tree/main/2023-08-04

Here's more on one of the surveillance systems under consideration, a technology controlled by Microsoft, which detects and auto-reports both CSAM and political subversion: https://kolektiva.social/@ophiocephalic/110782738969976772

4/20

ophiocephalic,
@ophiocephalic@kolektiva.social avatar

Now to the question of motive, which has also been disputed. Why, after an early July Threads launch with over 100 million "signups" (all of which carried over from existing Instagram accounts), would Meta care about our puny little nothing of a network?

According to one analytics firm, that usage number had dwindled to 576,000 by early August.

https://www.similarweb.com/blog/insights/social-media-news/threads-first-month/

Suddenly the fedi, with an estimated DAU of 1.8 million, doesn't seem so puny. But beyond any question of numbers, there is a crystal clear benefit to federation, one which would fit a well-worn pattern for the Zuckerberg entity - openwashing.

5/20

ophiocephalic,
@ophiocephalic@kolektiva.social avatar

A recent paper by @Mer__edith , @davidthewid and Sarah Myers West discusses the openwashing ploy utilized by Meta and other tech giants, in which "AI" and other exploitative technologies are obfuscated by a thin veneer of democratization.

Paper: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4543807
Fedi thread: https://mastodon.world/

@pluralistic riffs further on the openwashing concept: https://pluralistic.net/2023/08/18/openwashing/#you-keep-using-that-word-i-do-not-think-it-means-what-you-think-it-means

Meta currently has a big problem with regulators - particularly in the EU - demanding more interoperability from their social media operations. ActivityPub federation with their throwaway Threads side-project buys them a low-cost, low-risk figleaf. Perhaps we can term this particular variant "interop-washing"?

6/20

ophiocephalic,
@ophiocephalic@kolektiva.social avatar

Meta will federate for the same reason that Google pays Mozilla tens of millions a year to keep Firefox alive. But that comparison only takes us so far, because in this case, it's more like Google dishing out the money only on the condition that Firefox disables ad-blocking and sends telemetry to Google.

So, we have multiple, recent and ongoing indicators. We have motives and strategies which fit a type. Every signal beams in the same direction, and there are none which contradict it. Meta is coming.

And the ActivityPub protocol and major fediverse development projects are firmly under the control of facilitators who are smoothing the way. This is a blog post by one of the primary Mastodon developers, with a proposal to add in backend hooks for the algorithmic surveillance and telemetry collection demanded by the Facebook Mafia.

https://renchap.com/blog/post/evolving_mastodon_trust_and_safety/

7/20

ophiocephalic,
@ophiocephalic@kolektiva.social avatar

It would be a mistake to interpret the current lull in interest, and the odd nature of the Threads launch, as reasons to relax. On the contrary, this is the perfect moment to act on the protective impulses that engendered the FediPact.

Moreover, the push towards centralization, surveillance and algorithmic harvesting by Facebook-linked authoritarians, which is meeting no resistance from those at the top of the development and administrative hierarchy, makes urgent action nothing less than a necessity for everyone of a marginalized or targeted identity, all true believers in FLOSS, and radicals of every stripe.

This is the time to convene, prefigure, and build a Free Fediverse.

8/20

ophiocephalic,
@ophiocephalic@kolektiva.social avatar

Fediverse Communalism 2

How could the communalization of the fediverse manifest tangibly? One idea that pops up over and over from different corners is the organization of instances into alliances. Here is a thread proposing the formation of the fedifam construct.

https://kolektiva.social/@ophiocephalic/110793531238090472

In brief: Instances allied into fedifams could share resources and mutually support each other in many ways, such as:

🐸 A common charter of moderation principles
🐸 Hosting infrastructure and setup support
🐸 A crowdfunding mechanism
🐸 An open-source administration platform
🐸 A commonwealth of blocklists or allow-lists
🐸 A framework for new instance initiatives from within the fedifam to spin up

9/20

ophiocephalic, to FediPact
@ophiocephalic@kolektiva.social avatar

A Free Fediverse beyond surveillance capitalism should prioritize deepening its commitment to decentralization by keeping the maximum user count of its instances small.

This addresses practical needs. Smaller communities are easier to moderate, on a human scale which doesn't involve algorithms or invasive third-party data collection. Smaller communities disperse targets for threat models like spambots, and enhance network resilience. And smaller communities are better at scaling democracy, so that we can avoid being pulled back into the circumstance now plaguing the fediverse, in which mega-server admins unilaterally impose their will on everyone else.

However, keeping things small can create problems of its own. Smaller communities mean more people grappling with the complexities of setting up, administering, moderating, and, not least, funding operations. A system of mutual aid, beyond the current haphazard status quo, is required.

As an approach to solving these problems, and to instilling an ethos of solidarity devoid of the for-profit "monetization" impulse, consider the concept of the fedifam. :fediverso: 👩‍👩‍👧

🧵 1/4

ophiocephalic,
@ophiocephalic@kolektiva.social avatar

The fedifam would be a family or alliance of instances. Communities could align into fedifams based on whatever conditions of identity, philosophy or interest are relevant to them. Instances allied into fedifams could share resources and mutually support each other in many ways, such as:

🐸 A common charter of moderation principles
🐸 Hosting infrastructure and setup support
🐸 A crowdfunding mechanism
🐸 An open-source administration platform
🐸 A commonwealth of blocklists or allow-lists
🐸 A framework for new instance initiatives from within the fedifam to spin up

🧵 2/4

ophiocephalic, to FediPact
@ophiocephalic@kolektiva.social avatar

Those considering the use of the PhotoDNA surveillance process in the fediverse are advised to inform themselves about it first. It doesn't just detect "positives"; it can also auto-report them to authorities. Implementing it could turn this network into an automated police-state snitching system. Admins who do not choose auto-reporting may be legally obligated to manually report positives.

Also (independently verified, though officially contradicted), PhotoDNA is utilized not just to detect CSAM, but also "terrorist and violent extremist content". So who decides what constitutes this content? Why, the private unaccountable black box which is Microsoft Corporation, that shining beacon of privacy-respect, to which a hash of every image uploaded to and federated across enabled instances would be sent. Microsoft indicates the data can be utilized for facial recognition and "AI" ingestion as well.

Do they - or will they - rate advocacy of Palestinian equality and Kurdish feminism as extremist, as governments they do business with would demand? Do they or will they accept the inputs of oil and pipeline corporations in determining what constitutes terrorism? How will they expand the parameters of disallowed sexual deviance if Trump wins the election next year?
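
To make the mechanics concrete, here is a minimal, purely hypothetical sketch of what a PhotoDNA-style scanning hook on an instance amounts to. Endpoint URLs, field names and function names are illustrative only - this is not Microsoft's actual API - but the shape of the data flow is the point: a fingerprint of every image leaves the instance, and a match can be forwarded onward with no human in the loop.

    # Purely illustrative sketch of a PhotoDNA-style scanning hook (not Microsoft's real API).
    import hashlib
    import requests

    MATCH_API = "https://hash-matching-vendor.invalid/match"    # placeholder endpoint
    REPORT_API = "https://reporting-endpoint.invalid/report"    # placeholder endpoint

    def on_image_uploaded(image_bytes: bytes, uploader: str) -> None:
        # Real PhotoDNA computes a proprietary "robust hash"; SHA-256 stands in here
        # only to show that a fingerprint of every image is sent off-instance.
        digest = hashlib.sha256(image_bytes).hexdigest()
        verdict = requests.post(MATCH_API, json={"hash": digest}, timeout=10).json()
        # The vendor, not the instance admin, decides what the hash database contains:
        # CSAM today, "terrorist and violent extremist content" tomorrow.
        if verdict.get("match"):
            # Auto-report path: the match is forwarded onward without local review.
            requests.post(REPORT_API, json={"hash": digest, "account": uploader}, timeout=10)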

Understand that this is already happening. There are voices in the Mastodon GitHub issues calling for the addition of PhotoDNA right now. The Pixelfed project may also be adding it very soon. If that is true, antiauthoritarian instances should consider the feasibility of defederating from Pixelfed altogether.

For all those considering this: Stop. Please. All the "report" issued by the Facebook Mafia describes is the fediverse - and in fact, mostly its evil twin, the defediverse - just as it existed last month, last year, years before most on here had ever heard of it. The purpose of their influence operation is to scare everyone into turning the fediverse into the kind of policed and surveilled space suitable to Mark Zuckerberg.

Microsoft FAQ on PhotoDNA:
https://www.microsoft.com/en-us/PhotoDNA/FAQ

Microsoft Digital Safety Content Report, indicating use of PhotoDNA to detect extremism:
https://www.microsoft.com/en-us/corporate-responsibility/digital-safety-content-report

Technical analysis of the PhotoDNA process:
https://hackerfactor.com/blog/index.php?archives/931-PhotoDNA-and-Limitations.html

UN report referencing use of PhotoDNA to detect terrorism:
https://www.un.org/counterterrorism/sites/www.un.org.counterterrorism/files/countering-terrorism-online-with-ai-uncct-unicri-report-web.pdf

Transcript of 2018 US Senate hearing on terrorism and social media, referencing use of PhotoDNA:
https://www.govinfo.gov/content/pkg/CHRG-115shrg31316/html/CHRG-115shrg31316.htm

Journalism indicating use of PhotoDNA to detect extremism:
https://www.theatlantic.com/technology/archive/2016/06/a-tool-to-delete-beheading-videos-before-they-even-appear-online/488105/
https://www.scientificamerican.com/article/when-hatred-goes-viral-inside-social-medias-efforts-to-combat-terrorism/

Wikipedia pages:
PhotoDNA https://en.wikipedia.org/wiki/PhotoDNA
Global Internet Forum to Counter Terrorism https://en.wikipedia.org/wiki/Global_Internet_Forum_to_Counter_Terrorism

Screenshot from the Microsoft FAQ on PhotoDNA which reads: The PhotoDNA Cloud Service provides an API which we recommend all customers use to submit reports to the National Center for Missing and Exploited Children (NCMEC). Alternatively, customers may choose to report directly to NCMEC. Customers based outside the USA will need to self-determine other reporting requirements based on local law. To help ensure that the PhotoDNA Cloud Service is used solely for the purposes of preventing the spread of child sexual abuse content and supporting related investigations, customers of the PhotoDNA Cloud Service authorize Microsoft to take steps to monitor and audit their usage of the PhotoDNA Cloud Service. Customers authorize Microsoft to provide aggregate reports to NCMEC that summarize the number of images (matched to signatures of known child pornography images) a customer uploaded on to the PhotoDNA Cloud Service. By using the PhotoDNA Cloud Service, the customer understands that such reports do not relieve them of any legal requirements that may arise during their use of the PhotoDNA Cloud Service, including, but not limited to, any legal obligation a customer has to directly file NCMEC reports. For more information, please refer to the PhotoDNA Cloud Service Terms of Use.

ophiocephalic,
@ophiocephalic@kolektiva.social avatar

Followup to the PhotoDNA post: @dansup has reached out and confirmed that @pixelfed will NOT be enabling PhotoDNA. Thank you Daniel for your communication and your regard for privacy on the fediverse.

There are ideas circulating on decentralized, privacy-respecting ways to better keep everyone safe, ease burdens and liabilities for admins, and keep harmful material out of our timelines. These prospects would require more collective effort than simply signing up for a third-party service, but it's worth it. Our communities are overwhelmingly comprised of good people who aren't hurting anyone. We don't deserve to have algorithmic corporate surveillance inflicted on us.

Viva Pixelfed! :pixelfed:

https://mastodon.social/@dansup/110802145007075499

ophiocephalic, to FediPact
@ophiocephalic@kolektiva.social avatar

Interesting to note that the Stanford "report" references detecting what it calls CG-CSAM - abuse-porn generated by "AI". It's a tidy little racket this mafia runs. First, get rich on "AI" tech that turns all of human culture into a sewer. Then, get richer on the surveillance and censorship tech that their own "reports" proclaim as the only possible corrective to the problems their other tech has caused.

We don't need a control-state of surveillance and algorithmic governance. What we need is to rid ourselves of the dehumanizing technologies forced on us by these parasitic Silicon Valley authoritarians.

ophiocephalic, to FediPact
@ophiocephalic@kolektiva.social avatar

When considering the ask to not believe your lyin' eyes, and instead accept that the entire fediverse is a festering cesspit of child abuse, a suggestion: look at who is trying to manufacture consent here, and what else is happening in and to the network at this moment.

The "report" is issued by something called the Stanford Internet Observatory, which is not in fact a telescope on a hill, but rather an operation by the guy who, from 2015-2018, was the "Chief Security Officer" of Facebook - an ironic title, considering that this was the period of the Cambridge Analytica machination, the Rohingya genocide, and the Russian influence operation that exposed 128 million Facebook users to pro-Trump disinformation.

However, belonging to the fail-upwards meritocracy of Silicon Valley means never having to say sorry. He is now an influential voice in a circle advocating the addition of third party data collection and algorithmic analysis into the backend of mainline Mastodon. These developments appear to be progressing rapidly towards implementation.

And by the way, did you notice the name of the "report's" lead author? In fact, this is another influence operation - but one that is being run from Palo Alto rather than Moscow.

ophiocephalic, to FediPact
@ophiocephalic@kolektiva.social avatar

deleted_by_author

    ophiocephalic,
    @ophiocephalic@kolektiva.social avatar

    Consider also the prospect of a Free Fediverse composed of numerous confedis, which could provide answers to the questions we now face regarding moderation and safety.

    Confedis could form treaties of trust with each other, easing the introduction of new instances into the broader network. This would become especially important if, as some are suggesting, a complete block of Facebook and its collaborator instances requires a switch from blocklists to allow-lists.
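
    As a practical note, the allow-list switch already exists in Mastodon as "limited federation mode". A minimal sketch of enabling it, assuming current variable and command names (verify against your version's documentation):

        # .env.production - federate only with explicitly allowed domains
        LIMITED_FEDERATION_MODE=true

    Domains a confedi trusts would then be added explicitly (for example with tootctl domain_allows add), and everything else is simply never federated with, rather than blocked after the fact.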

    A fediverse of confedis would also enable reform of the development of blocklists. Today, a few hardy souls take matters into their own hands through informal projects, bearing the cost and effort without any guaranteed support from the rest of the network. Others observe the state of play and, rightly or wrongly, perceive a concentration of influence.

    Now imagine a Free Fediverse of confedis. Instance admins within a confedi deliberate together on blocklist/allow-list decisions; then, periodically, a fediverse moderation council convenes with a representative from each confedi, reproducing the democratic process to ensure the safety of the entire network.

    🧵 3/4

    ophiocephalic,
    @ophiocephalic@kolektiva.social avatar

    It's urgently important to think creatively on the moderation and blocklist issues. At this very moment, Meta collaborators are in firm control of the Mastodon and ActivityPub development projects, and are proposing schemes to enable third-party data collection, algorithmic moderation processing, and other backend telemetry hooks which could eventually facilitate surveillance, advertising and "AI" harvesting.

    These proposals hinge entirely on the assumption of continued growth of a small number of mega-servers, too large to moderate without algorithmic governance, and under requirements to conform to the moderation dictates of the Zuckerberg entity.

    These developments raise the stakes for all of us who affirm our right to nurture and maintain community beyond surveillance capitalism and the growth-at-all-costs pathology. Whatever Meta does next, we simply have no choice now but to develop an alternative paradigm which centers human-scaled communities and the culture and tech which facilitates them. It's not sufficient to simply sign off on the FediPact and wait to see what happens next.

    The confedi concept could provide a way forward; and there are lots of smart people on the fediverse who could help to build out and co-create an idea like this together. Let's collaborate, stretch our imaginations and build a Free Fediverse!

    🧵 4/4

    ophiocephalic, to FediPact
    @ophiocephalic@kolektiva.social avatar

    Mark Zuckerberg claims he wants Meta’s Threads app to be “friendly.” So far, unmoderated hate speech and misinformation are proliferating.

    The platform is already rife with election misinformation, racism, and anti-LGBTQ bigotry

    Media Matters article:

    https://www.mediamatters.org/facebook/mark-zuckerberg-claims-he-wants-metas-threads-app-be-friendly-so-far-unmoderated-hate

    ophiocephalic, to FediPact
    @ophiocephalic@kolektiva.social avatar

    Decentralization

    Prominent voices advocating for collaboration with the Zuckerberg surveillance entity sure do talk up decentralization a lot, when they're not advocating the subjugation of the fediverse to a single vertical silo of 100 million users. The irony, of course, is that they tend to be admins of instances with tens or even hundreds of thousands of users. And two of the most prominent control multiple mega-servers, which means they're not just overseeing centralized instances; they're hoarding them.

    In contrast, by default Pixelfed servers are limited to a maximum of 1000 users. Though a deep dive into the parameters can override this, its status as a default is an affirmation of the decentralizing ethos. "Thou shalt keep thy instance small."
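
    For reference, that cap is just a configuration default; a minimal sketch of the relevant Pixelfed .env settings, assuming the variable names haven't changed (verify against .env.example for your version):

        # Pixelfed .env - registration and instance-size defaults (names assumed)
        OPEN_REGISTRATION=true
        PF_MAX_USERS=1000   # the default ceiling; raising it is a deliberate opt-out of the small-instance ethos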

    The microblogging space of the fediverse hasn't been allowed to develop an equivalent consciousness, as the agenda has been set by mega-server admins who drove the conversation around topics like "smooth onboarding". But these aren't evil people; the problem is that they have no real vision.

    A comment circulated recently - receipt unfortunately not saved - suggesting that developing fediverse tools useful for organizing community would be an effective alternative to the "how to funnel in granny" mentality, because then there would be incentives for entire communities to migrate in together; surely a more holistic view of "onboarding" than fretting over how to pick up confused and wandering individuals one at a time. That is the kind of exercise of technical and social imagination we need.

    To become viable, the Free Fediverse will need to define itself by not just what it stands against - corporate enclosure by the Meta monstrosity - but by what it stands for. Real and actual decentralization - not just shallow lip service towards it - can be one of those foundational values.

    This value can then be encoded into the technology, as it was with Pixelfed; because, let there be no doubt, Zuckerberg is not just absorbing certain of the fediverse's communities, but also certain of its technologies. We'll need replacements, but that's an opportunity to break the current state of developmental stagnation in the predominant microblogging service and ActivityPub. And more important still than protocols and apps are those who create them. Essentially, the Facebook Fediverse gets the techbros, but the Free Fediverse gets the catgirls - which means we win!

    Real decentralization - lots and lots and lots of quite small communities, distinct yet federated - has already proven itself to be a better facilitator of good moderation, and will enable another important value to be addressed shortly. But on the moderation issue, a timely real-world example of why decentralization matters is instructive.

    There has recently been a calamity visited upon our instance, Kolektiva. Among all of the discussion following its disclosure, there was not a full analysis of its chain of causality. Let's take a flyover of the recent timeline.

    April - A massive spambot wave first hits mastodon-dot-social, then spreads quickly through the entire fediverse. Kolektiva, and many other servers, temporarily limit dot-social until the invasion is under control.

    Early May - Another spambot attack hits masto-dot-social, and of course, everyone else. This time, an error is made, and a Kolektiva admin defederates rather than limits dot-social. All Kolektiva users irrecoverably lose their follows and followers from dot-social. There is disquiet.

    Mid-May - In an attempt to restore the lost follow-follower data, a Kolektiva admin recovers a snapshot backup of the database from before the defederation, an operation which occurs with what turns out to be "spectacularly bad timing".

    Receipt: https://kolektiva.social/@admin/110641928258590367

    Yes, there was a fuckup; in fact, a fuckup compounded by another fuckup. But - beyond noting that both mistakes were attempts to do right by the users of the instance - the wellspring of the disaster actually wasn't Kolektiva, but mastodon-dot-social, that mega-server with hundreds of thousands of silo'ed users, open registration and next-to-no-moderation; that irresistible honeypot for spammers and scammers, that 500-pound gorilla with a bullseye painted on its ass.

    The mother of all instances has repeatedly proven itself to be a problem for the rest of the fediverse, as in the examples above, when the admins of literally every other server federated with it were put in the position of having to locally address a crisis not of their origination, each an opportunity to make mistakes they would not otherwise have needed to risk.

    Smaller instances are easier to moderate, larger instances more difficult. And if masto-dot-social is any indication, a large enough instance becomes a lost cause - take a look at dot-social's local feed and see if you agree. Decentralization distributes moderation agency more effectively, both to admins and even to users. And by scattering targets, it creates network resiliency against threats like spambots and crypto scams. Decentralization isn't just a foss-nerd buzzword, it yields tangible benefits for those seeking safer community online.

    1/2

