thenexusofprivacy

@thenexusofprivacy@infosec.exchange

A newsletter about #privacy, #technology, #policy, #strategy, and #justice.

Currently at @nexusofprivacy, but looking for a new home and so checking out infosec.exchange


thisismissem, to random
@thisismissem@hachyderm.io

This was a good read by @thenexusofprivacy on “Blocklists in the Fediverse” — https://privacy.thenexus.today/blocklists-in-the-fediverse/

As noted in that article, FIRES is attempting to shift away from blocklists in favour of moderation recommendations and advisories. It also introduces finer-grained controls than just “defederate or silence”.

FIRES will support not just domains but also the ability to provide moderation advisories and recommendations on other entities, e.g., hashtags, actors, links, media, etc.
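To make that concrete, here's a minimal sketch of what such an advisory might look like as a data structure. This is illustrative only -- it is not the actual FIRES schema, and every field and value name here is an assumption.

# Illustrative only -- NOT the actual FIRES data model; all names here are assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Literal

# Advisories can target more than just domains: hashtags, actors, links, media, etc.
EntityType = Literal["domain", "hashtag", "actor", "link", "media"]

# Finer-grained than "defederate or silence": a recommendation that the receiving
# instance (or user) decides whether to act on.
Recommendation = Literal["advisory_only", "limit", "reject_media", "silence", "defederate"]

@dataclass
class ModerationAdvisory:
    entity_type: EntityType
    entity: str                 # e.g. "example.social" or "#somehashtag"
    recommendation: Recommendation
    rationale: str
    issued_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: an advisory about a single hashtag, rather than a whole-domain block.
advisory = ModerationAdvisory(
    entity_type="hashtag",
    entity="#examplespamwave",
    recommendation="advisory_only",
    rationale="Coordinated spam campaign observed; review before acting.",
)
print(advisory)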

thenexusofprivacy,

@thisismissem Yep! And our discussions certainly helped me think of blocklists as recommended moderation actions, and applying the blocklist as actually taking the actions.

And, agreed on the value of tooling here.

thenexusofprivacy,

@volkris I very much agree on the value of giving users better tools ... I don't think there's anything in FIRES that restricts it to being instance-only. That said, I also think there's value in instance-level defaults and, in some cases, instance-level decisions. Illegal content is a situation where instance-level decisions are needed; instance-level blocks of Nazi, terf, and white-supremacist instances are another situation where they're valuable. If people personally want to federate with Nazi, terf, and white-supremacist sites, they can find instances that don't block them.

@thisismissem

thenexusofprivacy, (edited ) to fediverse

Mastodon and today's fediverse are unsafe by design and unsafe by default – and instance blocking is a blunt but powerful safety tool

Part 1 of "Golden opportunities for the fediverse – and whatever comes next"

https://privacy.thenexus.today/unsafe-by-design-and-unsafe-by-default/

Over the course of this multi-part series, I'll discuss Mastodon and the fediverse's long-standing problems with abuse and harassment; the strengths and weaknesses of current tools like instance blocking and blocklists; the approaches emerging tools take, along with potential problems; paths to improving the situation; and how the fediverse as a whole can seize the moment and build on the progress that's being made. At the end I'll collect it all into a single post, with a revised introduction.

This first installment has three sections:

  • Today's fediverse is unsafe by design and unsafe by default

  • Instance-level federation choices are a blunt but powerful safety tool

  • Instance-level federation decisions reflect norms, policies, and interpretations

thenexusofprivacy,

@raf great point -- and, thanks very much, glad you liked it!

thenexusofprivacy,

Blocklists in the fediverse

https://privacy.thenexus.today/blocklists-in-the-fediverse/

Part 2 of "Golden opportunities for the fediverse -- and whatever comes next"

This installment has five sections:

  • Blocklists

  • Widely shared blocklists can lead to significant harm

  • Blocklists potentially centralize power -- although they can also counter other power-centralizing tendencies

  • Today's fediverse relies on instance blocking and blocklists

  • Steps towards better blocklists

@fediversenews

thenexusofprivacy,

@mikedev In the first section I mentioned that "Other software platforms like Akkoma, Streams, and Bonfire have some much more powerful tools ... but over 80% of the active users in today's fediverse are on instances running Mastodon software." In the upcoming section on paths forward I mention Streams' commentPolicy, and one of the recommendations is broader adoption of platforms that provide better tools for people to protect themselves.

A question while I have you here: are there any BIPOC-led sites running Streams whose perspective I should get?

thenexusofprivacy, (edited )

@apophis Thanks for sharing the link here. I agree that the pervasive surveillance of today's online world means that it's a lot less safe than it was back in the day (and the Fediverse has plenty of room for improvement on that front as well -- see Threat modeling Meta, the fediverse, and privacy ) but that isn't really what I was focusing on here.

In terms of freedom to shitpost, one of the strengths of the fediverse is that different instances can have different policies, so it can indeed provide a home for that. But people (and instances) also have the freedom to deal with shitposts they see as hate speech, including blocking them. It wasn't an authoritarian power grab when most fediverse instances blocked Gab; it's not an authoritarian power grab when most fediverse instances block poa.st.

And @lebronjames75 re your comment here, hate speech has an impact on the psychological and physical health of its targets, so limiting it is very much a matter of safety. Your blunder of describing this as a "completely meaningless emotional description" reveals that you don't value Black, Indigenous, or Muslim people's safety. Thank you for this illustration of why shitposter.club is so widely blocked!

thenexusofprivacy,

@mikedev whoops, it is a typo, thanks! And, thanks for asking.

thenexusofprivacy,

@mikedev I believe it -- limiting replies to connections by default certainly makes a huge difference, so does moderating public groups. But it's also hard to know how much safety in that sector benefits from being low-profile, relatively small, and (at least in my impression) not particularly racially diverse. That's why I'm interested in talking to BIPOC-led sites to get their perspectives.

thenexusofprivacy, to random

Asian Americans Raise Alarm Over ‘Chilling Effects’ of Section 702 Surveillance Program

https://www.wired.com/story/aapi-section-702-letter/

Dozens of prominent Asian American groups are asking United States lawmakers this morning to hold fast in the face of an anticipated campaign by congressional leaders to extend the Section 702 surveillance program by securing it, like a rider, to another “must pass” bill.

thenexusofprivacy, to random

Urgent: Call Congress to stop KOSA:

From @EFF's Call Congress to Stop KOSA:

The Senate may have a simple voice vote in the next week to move the Kids Online Safety Act (KOSA) quickly through the legislature, without debate, but any one senator can stop it with a hold. We need you to call your senator's office today to tell them to stop KOSA. KOSA would censor the internet and would make government officials the arbiters of what young people can see online, and would likely lead to age verification.

Just last week more than 70 LGBTQ+ organizations came out against this dangerous and misguided bill, which would make kids less safe rather than more safe and especially harm LGBTQ+ youth. So it's crucial to stop it from moving forward!

EFF's page makes it easy to call your Senators; or, https://www.stopkosa.com/ makes it easy to send a letter (and find out more about the bill).

getfisaright.net, to random
@getfisaright.net@getfisaright.net

We’re heading into a busy time for FISA activism. FISA Section 702 expires in December 2023 unless Congress re-authorizes it, and the just-introduced bi-partisan Government Surveillance Reform Act (GSRA) combines significant FISA reforms with other important protections.

And conveniently enough, WordPress now makes it easy to connect blogs to the fediverse, a decentralized ecosystem of social networks. If you’ve got a Mastodon account, you should be able to follow us at @getfisaright

A lot of people in the fediverse are passionate and knowledgeable about privacy and civil liberties … and because FISA affects “non-US persons” as well as Americans, it’s something that’s likely to have broad interest. Of course, as Privacy activism on Mastodon and in the fediverse discusses, there are also some barriers to activism in the fediverse, so we’ll see how well it works out … but @rt4 @eff and other civil liberties groups are already there, so it’s worth a try!

https://getfisaright.files.wordpress.com/2023/11/231108-wp-enter-the-fediverse-4.png

https://getfisaright.net/2023/11/08/get-fisa-right-has-entered-the-fediverse/

thenexusofprivacy,

@getfisaright.net@getfisaright.net hello? hello? is this thing on?

thenexusofprivacy, (edited ) to random

PCLOB Recommends Reforms to FISA Section 702 to Protect Americans’ Privacy

https://www.brennancenter.org/our-work/analysis-opinion/federal-oversight-board-recommends-reforms-section-702-fisa-protect

"This morning the Privacy and Civil Liberties Oversight Board (PCLOB), an independent agency within the executive branch, issued a report on Section 702 of the Foreign Intelligence Surveillance Act. The report recommends that Congress enact a wide range of reforms to Section 702, including a requirement that federal agents obtain approval from a judge to access data collected under Section 702 for an American’s communications. Three of the five Board members stated that they would support a “probable cause” standard for both criminal and mixed criminal/foreign intelligence investigations. The Chair of the Board issued a separate statement explaining the necessity for a probable cause standard in these cases."

Elizabeth Goitein of @BrennanCenter has a detailed thread at https://twitter.com/LizaGoitein/status/1707379703479185889

thenexusofprivacy, to random

The Maker of ShotSpotter Is Buying the World’s Most Infamous Predictive Policing Tech

https://www.wired.com/story/soundthinking-geolitica-acquisition-predictive-policing/

SoundThinking, the company behind the gunshot-detection system ShotSpotter, is quietly acquiring staff, patents, and customers of the firm that created the notorious predictive policing software PredPol, WIRED has learned.

In an August earnings call, SoundThinking CEO Ralph Clark announced to investors that the company was negotiating an agreement to acquire parts of Geolitica—formerly called PredPol—and transition its customers to SoundThinking’s own “patrol management” solution.

thenexusofprivacy, to random

38TB of data accidentally exposed by Microsoft AI researchers

"Microsoft’s AI research team, while publishing a bucket of open-source training data on GitHub, accidentally exposed 38 terabytes of additional private data — including a disk backup of two employees’ workstations.

The backup includes secrets, private keys, passwords, and over 30,000 internal Microsoft Teams messages."

https://www.wiz.io/blog/38-terabytes-of-private-data-accidentally-exposed-by-microsoft-ai-researchers

jerry, to random

I’ve been on the fediverse for 8 years and today was my first run-in with CSAM on another (otherwise legit) instance. A reminder to instance admins in the USA: you have a duty to report CSAM that lands on your instance to maintain any sort of criminal and civil indemnification. I hope whoever is responsible for that is brought to justice and the minor involved is helped.

Remember friends, strict liability laws are not to be trifled with.

thenexusofprivacy,

It’s a challenging situation. In the US, 18 USC §2258A requires you to "preserve any visual depictions, data, or other digital files that are reasonably accessible and may provide context or additional information about the reported material or person" and "maintain the materials in a secure location and take appropriate steps to limit access by agents or employees of the service to the materials to that access necessary to comply with the requirements of this subsection". As Denise Paolucci of Dreamwidth says:

“This means you must not delete it and the associated information about the poster until law enforcement tells you that you can, but you do have to make it not-visible.”

https://denise.dreamwidth.org/91757.html

But other jurisdictions have different requirements.

@XEJKnol @henryk @jerry

thenexusofprivacy,

It depends on where you are. In the US, you’re required to keep the evidence and make it non-visible - https://infosec.exchange/@thenexusofprivacy/111081908165205259 @AustinB

thenexusofprivacy,

@argv_minus_one no, in the US reporting immunizes you - https://infosec.exchange/@thenexusofprivacy/111081908165205259 quotes the relevant text from the legislation

thenexusofprivacy, to random

Great live-tooting thread from today's session by @irenes

https://mastodon.social/@irenes/111048397258634492

thenexusofprivacy,

@irenes thank you! Threads like that are so valuable!

thenexusofprivacy, to privacy

College Board shares SAT Scores with Facebook, TikTok, and others

https://gizmodo.com/sat-college-board-tells-facebook-tiktok-your-scores-gpa-1850768077

"Gizmodo observed the College Board’s website sharing data with Facebook and TikTok when a user fills in information about their GPA and SAT scores. When this reporter used the College Board’s search filtering tools to find colleges that might accept a student with a C+ grade-point average and a SAT score of 420 out of 1600, the site let the social media companies know. Whether a student is acing their tests or struggling, Facebook and TikTok get the details.

The College Board shares this data via “pixels,” invisible tracking technology used to facilitate targeted advertising on platforms such as Facebook and TikTok. The data is shared, along with unique user IDs to identify the students and other information about how you use the College Board’s site, with Facebook, TikTok, and a variety of companies."

@privacy
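For readers wondering how a tracking pixel carries this kind of data: the page requests a tiny image from the ad platform and encodes the form values and a user ID in the URL's query string. Here's a minimal, purely illustrative sketch -- the endpoint and parameter names are hypothetical, not the actual pixels the College Board uses.

# Purely illustrative sketch of the tracking-pixel mechanism described in the
# article -- the endpoint and parameter names below are hypothetical, not the
# actual pixels the College Board uses.
from urllib.parse import urlencode

def pixel_url(user_id: str, gpa: str, sat_score: str) -> str:
    # The page "loads an image", but the real payload is the query string,
    # which the ad platform logs against the user ID.
    params = {"uid": user_id, "gpa": gpa, "sat": sat_score, "page": "/college-search"}
    return "https://ads.example.com/pixel.gif?" + urlencode(params)

print(pixel_url(user_id="abc123", gpa="C+", sat_score="420"))
# -> https://ads.example.com/pixel.gif?uid=abc123&gpa=C%2B&sat=420&page=%2Fcollege-search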

thenexusofprivacy, to fediverse

Threat modeling Meta, the fediverse, and privacy

https://privacy.thenexus.today/fediverse-threat-modeling-privacy-and-meta/

There's very little privacy on the fediverse today. Mastodon and other fediverse software weren't designed or implemented with privacy in mind. Even the underlying protocol that powers the fediverse has major limitations. But it doesn't have to be that way!

Meta's new product means that it's critical for the fediverse to start focusing more on privacy. Of course, Meta's a threat in many other ways as well; that said, the privacy aspects are important too.

For one thing, if Meta does indeed follow through on its plans to work with instance admins and other "partners" who want to monetize their users (and their data), people in the region of the fediverse that's not Meta-friendly will need stronger privacy protections for their data. And Meta's far from the only threat to privacy out there; changes that reduce the amount of data Meta can gather without consent will also help against other bad actors.

More positively, there's also a huge opportunity here. Privacy's even worse on Facebook and Instagram than it is in the fediverse. So if the fediverse can provide a more private alternative, that will be hugely appealing to a lot of people.

Any way you look at it, now's a good time for the fediverse to take privacy more seriously.

The bulk of the article focuses on threat modeling, a useful technique for identifying opportunities for improvement. It's a long article, though, so if you don't want to wallow in the details, feel free to skip ahead to the section at the end on the path forward and the specific recommendations.

And if you're already bought in to the idea that the fediverse should focus more on privacy, and just want to know how you can help make it happen, it also suggests specific actions you can take -- and there's a section with some thoughts for instance admins.

Here's the table of contents:

  • There's very little privacy on the fediverse today. But it doesn't have to be that way!
  • Today's fediverse is prototyping at scale
  • Threat modeling 101
  • They can't scrape it if they can't fetch it
  • Different kinds of mitigations
  • Attack surface reduction and privacy by default
  • Scraping's far from the only attack to consider
  • Win/win "monetization" partnerships, threat or menace?
  • A quick note to instance admins
  • Charting a path forward
  • Recommendations

This is still a draft, so as always feedback is welcome. And thanks to everybody for the feedback on previous drafts!

https://privacy.thenexus.today/fediverse-threat-modeling-privacy-and-meta/

thenexusofprivacy,

FYI @jerry a question related to the post I just did on threat modeling Meta and privacy ⬆️. One of the topics I cover is reducing the number of public posts (attack surface reduction from a privacy perspective). Local-only posts can make a big difference on smaller instances -- @darius was quoted as saying 70%. infosec.exchange is the only largeish Mastodon instance I know of that has local-only posts, and I was wondering if you have any estimate of how much they're used here.

[Also local-only posts are good for other reasons as well, so kudos to you for being a point off the curve and supporting them!]

thenexusofprivacy,

@jerry interesting! Most people on infosec probably see it as a public place, which isn't a bad thing at all, just different from the smaller services that tend to run Hometown. Anyhow I'll revise the post to incorporate that info, thanks much!

@darius

thenexusofprivacy,

Great conversation! It could be that I'm overstating the privacy value of local-only posts on larger instances in practice, so I'll think about that more.

@darius in the threat modeling post I said

"Even with all of today's software issues, community-oriented instances, configured with a focus on safety (which includes privacy) and an emphasis on local conversations, can be more private (and a lot safer) than Facebook or Instagram."

In an earlier draft I had a few more details (relatively small, not allowing open registration, limited federation to a small bubble of similarly safety-oriented instances), maybe I should put them back.

@ArtBear totally agree on the need for better groups -- including private groups which kbin/lemmy don't have.

And in terms of creating or piggybacking users & harvesting local feeds, the only potential defenses I talked about in the post are legal (TOS) or limiting the damage by rate limiting (which has its downsides) ... neither of which are great. So this is a hard-to-mitigate threat for threat actors like intelligence agencies or motivated stalkers targeting specific people or instances. For Meta in particular, though, or anybody trying to target "the whole fediverse", it's enough harder (and potentially legally risky) than other kinds of access that cutting off this and other easy options is useful.

@jerry
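As one concrete illustration of the "limiting the damage by rate limiting" option mentioned above, here's a minimal per-client token-bucket sketch. It's not code from Mastodon or any other fediverse server -- just an assumption-laden example of how such a limiter can slow bulk harvesting of feeds by a single client.

# Minimal per-client token-bucket rate limiter -- an illustrative sketch of the
# "limit the damage" mitigation mentioned above, not code from any fediverse server.
import time
from collections import defaultdict

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec          # tokens refilled per second
        self.burst = burst                # maximum tokens a client can bank
        self.tokens = defaultdict(lambda: float(burst))
        self.last_seen = defaultdict(time.monotonic)

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        elapsed = now - self.last_seen[client_id]
        self.last_seen[client_id] = now
        # Refill, capped at the burst size, then spend one token if available.
        self.tokens[client_id] = min(self.burst, self.tokens[client_id] + elapsed * self.rate)
        if self.tokens[client_id] >= 1.0:
            self.tokens[client_id] -= 1.0
            return True
        return False  # a server would respond with HTTP 429 here

limiter = TokenBucket(rate_per_sec=1.0, burst=30)
print(limiter.allow("203.0.113.7"))  # True until the client exhausts its burst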
