joebeone,
@joebeone@techpolicy.social avatar

Last week, the @internetsociety filed an amicus brief before the Supreme Court of México, in Richter v. Google, an important case on intermediary liability for the Mexican Internet, and our first non-US legal intervention. 1/

karlauerbach,
@karlauerbach@sfba.social avatar

@joebeone @internetsociety On the other hand, consider if we had an internet that didn't have the IPv4 address limitations and was more like IPv6. In that world everyone who could afford a Raspberry Pi could be a direct publisher.

The existence of intermediaries was perhaps somewhat driven by the difficulty of self-publishing, a difficulty that is fading with the deployment of no-NAT IPv6 and increasing upstream bandwidths to individuals' homes.
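A minimal sketch of what that kind of direct publishing could look like once a home machine has a globally routable IPv6 address; the port and the idea of serving the current directory are illustrative assumptions, not anything prescribed in this thread:

```python
# Illustrative sketch: a tiny HTTP server a Raspberry Pi could run to
# publish directly over IPv6. Port 8080 and the served directory are
# arbitrary assumptions for the example.

import socket
from http.server import HTTPServer, SimpleHTTPRequestHandler

class HTTPServerV6(HTTPServer):
    # http.server binds IPv4 by default; switch the socket family to IPv6.
    address_family = socket.AF_INET6

if __name__ == "__main__":
    # "::" listens on all of the host's IPv6 addresses, including the
    # globally routable one; with no NAT in the way, readers connect
    # to the machine directly, no intermediary required.
    HTTPServerV6(("::", 8080), SimpleHTTPRequestHandler).serve_forever()
```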

I am aware that intermediaries also exist because they are convenient and can amortize costs over a larger base. Those attributes perhaps [and probably] outweigh my self-publishing argument.

Nevertheless, the net has evolved to a point where it is quite feasible to construct nice means of publishing and discussing without an intermediary holding and redistributing content produced by others. Distributed systems, such as Mastodon, take a partial step in that direction.

danyork,
@danyork@mastodon.social avatar

@karlauerbach @joebeone @internetsociety I completely agree that publishing and discussion CAN easily happen these days without intermediaries. A lot of the various Fediverse systems are a good step in that direction.

My personal issue is that after decades of running my own systems, I don't want to anymore! I just want to write. I'll gladly pay or use an intermediary to take care of that for me.

Similarly, I'll use an intermediary for caching/CDN purposes so that pages load fast globally.

1/2

danyork,
@danyork@mastodon.social avatar

@karlauerbach @joebeone @internetsociety

The other place I think intermediaries are with us for better or worse is... discovery of content.

I'm not really aware of how we've cracked that particular nut without centralized databases or systems, whether for a search engine, a directory site, or a social media site where links are shared.

If you have pointers about that, I'd love to see them.

2/2

karlauerbach,
@karlauerbach@sfba.social avatar

@danyork @joebeone @internetsociety Resource discovery on the net is an interesting research topic.

I came at the problem a couple of decades back by asking "what is the best existing resource discovery system?"

My answer was "Insects and pheromones".

I posited a system in which resource announcement packets would be randomly generated and forwarded, subject to a time-to-live (TTL). That had the nice property of proximity: you were more likely to "smell" a resource if you were closer to it. Multiple announcements could fit into a single packet.

"Designated noses" could act as collectors and could advertise themselves the same way.

The problem I had was encoding the announcements into the space of UDP packets. It can take a lot of bits - nothing close to what is available in a pheromone molecule.
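A toy sketch of how such announcements might be generated and relayed, under stated assumptions (JSON payloads, a static neighbor list, a fanout of two); the hard encoding problem mentioned above is not solved here:

```python
# Toy sketch of pheromone-style resource announcements over UDP.
# Each announcement carries a hop count ("TTL"); relays decrement it and
# forward to a random subset of neighbors, so announcements fade with
# distance, giving the proximity property described above. The JSON
# encoding, peer addresses, and fanout are illustrative assumptions.

import json
import random
import socket

NEIGHBORS = [("192.0.2.10", 9999), ("192.0.2.11", 9999)]  # assumed peers
FANOUT = 2        # relay each announcement to at most this many neighbors
INITIAL_TTL = 4   # hops before the "scent" evaporates

def announce(sock: socket.socket, resource: str) -> None:
    # Several announcements could share one datagram; one is enough here.
    msg = json.dumps({"resource": resource, "ttl": INITIAL_TTL}).encode()
    for peer in random.sample(NEIGHBORS, min(FANOUT, len(NEIGHBORS))):
        sock.sendto(msg, peer)

def listen(port: int = 9999) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    while True:
        payload, _src = sock.recvfrom(1500)
        msg = json.loads(payload)
        print(f"smelled {msg['resource']} at ttl={msg['ttl']}")
        msg["ttl"] -= 1
        if msg["ttl"] > 0:  # scent remains; relay onward
            data = json.dumps(msg).encode()
            for peer in random.sample(NEIGHBORS, min(FANOUT, len(NEIGHBORS))):
                sock.sendto(data, peer)
```

In this sketch a "designated nose" would just be a node that runs listen() persistently, caches what it smells, and announces its own collector service the same way.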

I want to resume the project, one of many I have on ice (others have to do with self-healing/homeostasis of the net, and increased robustness/reduced brittleness).

danyork,
@danyork@mastodon.social avatar

@karlauerbach @joebeone @internetsociety Interesting ideas. I like your other topics, too. Increased robustness / resilience is definitely something we need in a time of bizarre and extreme weather.
