Posts


fabio, to random
@fabio@manganiello.social avatar

Me: “After long consideration, I’ve decided not to defederate Threads from my personal instance, because the benefits of being able to reach my friends and relatives through the open tools that I’m helping to build and run outweigh the risks. I’ll keep an eye on it, I reserve the right to block Threads later, and I respect and understand those who prefer to block them instead.”

Easily triggered strangers: “You self-entitled privileged cis tech bro, you are not doing enough to protect vulnerable minorities from the fascist harassers in the world out there, I hope you die from a gut infection.”

So much for “the Fediverse is an open place that embraces diversity and mutual respect where everybody should feel safe”.

fabio,
@fabio@manganiello.social avatar

I feel like @zuck may achieve his goal of killing the #Fediverse through divide et impera without even needing to kickstart the E-E-E phase.

The simple announcement that #Threads is going to federate has caused such a huge backlash, and so much pressure and so many retaliatory blocks and defederations towards users and admins guilty of not doing enough to block this perceived “cancer” (including @Gargron), that I feel like the Fediverse is at risk of splintering in two - a subset of instances that have decided to defederate Threads and want to cut all bridges with those who haven’t, and another subset that has decided not to.

fabio,
@fabio@manganiello.social avatar

@dg3hda what I care most about is the open protocol. If ActivityPub reaches critical mass, then the whole industry may be pushed into adopting an open standard that will lower entry barriers and benefit everyone. If we shut down any effort towards external integrations (it’s not only about Threads - I’ve seen very similar arguments when Wordpress/Tumblr/Flipboard federated or considered federating) because we want our open solution to be used only in a safe and controlled environment, then the world outside of the Fediverse is more likely to stay stuck with proprietary solutions.

fabio, to random
@fabio@manganiello.social avatar

How can you call yourself a civilized country when you can’t even keep guns out of schools, and your solution to gun violence in schools is either to put armed guards at the entrance or to give teachers guns as well?

Is it a civilized Western country or is it the fucking Wild West?

How come most of the other countries around the world figured out long ago how to send kids to school without fear of them being shot, while the richest country in the world still can’t figure out how to solve such a simple problem?

https://www.dw.com/en/tennessee-passes-bill-to-let-teachers-carry-guns-at-school/a-68903939

chillanarchist01,
@chillanarchist01@liberdon.com avatar

@fabio @Bernard For the record, the right of self-defense (and free expression and property ownership) are inherent to every human on the planet. It's just that some people seem to have grown lazy and comfortable being farmed for taxes by people with fancy titles.

fabio,
@fabio@manganiello.social avatar

@chillanarchist01 @Bernard you’re completely disregarding the foundations of modern civilization.

Defense from crime and abuse is a task undertaken by governments in basically all civilized countries.

You have laws that set clear boundaries on what’s allowed, courts whose job is to enforce those laws, and a police force that is trained to use weapons, if needed, to protect other citizens from those who break them.

Civilized societies are shaped around such separation of concerns and accountability - something that you don’t have if every citizen feels that they have the right to pull the trigger whenever they like.

This is how things have been working at least since the times of the Greek polis and the Roman empire.

If you mistrust institutions, elected governments and police forces to the point that you feel entitled to take matters into your own hands however you please, then either you have a problem with rotten institutions, or you have a mindset problem. Either way, your idea of advanced civilization was already outdated 2,500 years ago.

And you can’t seem to see that, if you’re the only country following these principles and the only country with a major gun violence problem, then the problem is probably with your country, not with everyone else.

fabio, to Matrix
@fabio@manganiello.social avatar

I wish that #Matrix (and #Synapse) were better designed.

It has become among the most popular open messaging protocols, but it remains a scalability nightmare.

It's literally the blockchain of messaging. Its current state is the sum of the whole chain of events received since t=0. It's a brittle append-only ledger, and any modification you make to it is very likely to break it.

Do you have a huge 100 GB database that you want to clean up? Sorry, you can't. There are some unofficial solutions for compacting the events, but they're all likely to break your db - and none has worked in my case so far.

Do you have users on your instance who joined a busy room on matrix.org? Then you'll get all the join/leave events of that busy room in your db, with no way of deleting them and nothing you can do unless your users leave those rooms.

Did you start your instance by toying with the default SQLite backend, and now that it's become big you want to move to Postgres? Sorry, no official guides provided, only unofficial procedures scattered across a bunch of blogs.

Do you want to change the name of your server? Sorry, you can't. All the events are tightly coupled to your server name. The advised solution is to simply start a new instance.

Such a rigid and brittle implementation shouldn't have become a de facto open standard without far more questioning of its design decisions.

At the very least, some official tools must be provided to enable admins to compact events. If the size of the database is guaranteed to grow indefinitely, then the entry barrier to self-hosting will only keep rising.
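As a diagnostic starting point, here's a minimal sketch (the homeserver URL and admin token are placeholders) that uses Synapse's room list admin API to show which rooms contribute the most state events - usually the first suspects when the database balloons:

import requests

# Placeholders: point these at your own homeserver and an admin user's token.
HOMESERVER = 'https://matrix.example.com'
ADMIN_TOKEN = 'syt_...'

# Synapse's room list admin API can sort rooms by the number of state
# events they contribute to the database (order_by=state_events, dir=b
# for descending).
resp = requests.get(
    f'{HOMESERVER}/_synapse/admin/v1/rooms',
    params={'order_by': 'state_events', 'dir': 'b', 'limit': 10},
    headers={'Authorization': f'Bearer {ADMIN_TOKEN}'},
)
resp.raise_for_status()

for room in resp.json()['rooms']:
    print(room['room_id'], room['name'], room['state_events'])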

tyil,

@fabio The further #Matrix goes, the more obvious it becomes that we should've stuck to #XMPP. The developer time wasted on Matrix is immense; imagine if that effort had been put into making a nice XMPP server implementation, with all the basic features people desire baked in by default. They would've had time to spare on a nice client, if they felt that was needed to make a complete package to market.

debacle,
@debacle@framapiaf.org avatar

@fabio @tyil @chuck @Menel

Bridging to other protocols:

See Slidge by @nicoco for Signal, WhatsApp, Telegram, Discord, Mattermost, Steam, Skype, Facebook and Matrix. (Work in progress, but promising!)

See for IRC. Very stable.

See by @goffi for ActivityPub. (Again, work in progress.)

See for MSN and Yahoo (maybe more).

See by @pantierra and myself for SMS.

fabio, to random
@fabio@manganiello.social avatar

@adiz lol my bad - my tut client mixed up the threads.

The only alternative I see to Matrix right now is XMPP. I still run my own server. But it doesn’t even come close to competing on the number of available bridges (that’s actually the main thing I use Matrix for).

The only thing that can still compete with Matrix when it comes to bridges/integrations is IRC+bitlbee. But that ecosystem is literally falling apart: it’s largely based on libpurple extensions that often haven’t been touched in years, and of course you can forget about decent mobile-native clients.

Or maybe the answer is just to run alternative servers to Synapse, but so far I’ve had a mixed experience with them - Conduit is definitely snappier, but I’ve had trouble setting up many of my bridges, which seem to be primarily designed for Synapse.

fabio,
@fabio@manganiello.social avatar

@Goffi integrations like Instagram, Telegram and WhatsApp mostly rely on libpurple plugins (like purple-gowhatsapp or purple-instagram) that are either poorly documented, unstable, or lacking many of the features that are available in Matrix bridges.

By some of these developers’ own admission, they usually don’t test their plugins against Spectrum2, and mileage may vary a lot. And when one of these plugins dies, it usually takes the whole service down with it.

I’m in general a bit unimpressed with the state of libpurple - I feel like it’s becoming a very aged ecosystem maintained by a very limited number of developers who have to make those plugins work for a very wide range of clients. That’s also the reason why I left bitlbee.

Slidge seems to be promising, but I feel like it’s still at an immature stage.

Goffi,
@Goffi@mastodon.social avatar

@fabio Indeed, I don't think that libpurple integration is great; I've just mentioned it because you were talking about bitlbee, and Spectrum2 is the XMPP equivalent.

Slidge is working on tight integration and is the way to go. And Telegram and WhatsApp are supported (I don't see Instagram, though).

My point is that gateways have been part of the XMPP ecosystem since the early days, and there are many. You say that you use Matrix mainly for its bridges, and I think that XMPP's offering is similar.

fabio, to random
@fabio@manganiello.social avatar

Conservatives' approach to solving difficult problems: sign a law that makes the problem illegal, and the problem will solve itself.

https://www.axios.com/local/miami/2024/03/20/florida-law-desantis-homeless-camps-public-sleeping

fabio,
@fabio@manganiello.social avatar

Look just how happy and proud he is to have signed a law that bans homeless people from sleeping on the streets.

As if he had actually solved the homelessness problem.

As if he had actually built more affordable houses for thousands of citizens.

As if tomorrow all the homeless people would suddenly disappear into underground tunnels.

You can definitely recognize the face of a genuine sociopath and morally bankrupt jerk when you see one.

tootsnotreal,
@tootsnotreal@mastodon.social avatar

@dannotdaniel users/fabio@manganiello.social
As you wish, @dannotdaniel 🧀

fabio, to random
@fabio@manganiello.social avatar

I’ve just had to debunk some more pseudoscience garbage shared by my mother on Facebook. The only reason why I still use that platform every now and then is to make sure that my family doesn’t go completely off the rails with conspiracy bullshit.

This time, after chemtrails, vaccines that cause autism, Covid intentionally spread by the government, climate change that doesn’t exist, a new world order secretly decided in Davos, freemasons and Jews ruling the world, gender theory pushed by a cult of paedophiles, and Putin and Trump as the only saviours, it was the turn of the Moon landing that never happened.

The arguments used by the guy (“there are no signs of the source code that was used by the Apollo missions, and the lunar module didn’t have enough thrust to take off”) could be dismantled with a simple GitHub link and a quick round of the Tsiolkovsky rocket equation.
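For reference, the back-of-the-envelope check takes a few lines (a sketch with rough, publicly documented figures for the Lunar Module ascent stage; exact numbers vary by source):

import math

# Tsiolkovsky: delta_v = Isp * g0 * ln(m_wet / m_dry)
isp = 311.0      # ascent engine specific impulse, seconds (approximate)
g0 = 9.81        # standard gravity, m/s^2
m_wet = 4700.0   # ascent stage mass with propellant, kg (approximate)
m_dry = 2350.0   # ascent stage mass at burnout, kg (approximate)

delta_v = isp * g0 * math.log(m_wet / m_dry)
print(f'{delta_v:.0f} m/s')  # ~2100 m/s, comfortably above the
                             # ~1700 m/s needed for low lunar orbit

So yes, the numbers close with room to spare - no conspiracy required.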

But the original post had hundreds of reshares, while my response didn’t get a single reaction. And my mother, who quit school at the age of 12 and has no clue what either specific impulse or escape velocity is, didn’t understand anything of my explanation anyway, and tomorrow she’ll probably just reshare some more bullshit that she wants to believe in.

Does anyone have family members or close friends who are also stuck in this vicious conspiracy cycle that apparently can’t be broken by any amount of reasoning? How do you cope with it? It is becoming utterly depressing on my side. The conspiracy-inclined side of social media has features that resemble those of a cult far too much. I fought to take my family out of a cult years ago, just to see them plunge into another one. The time required to debunk bullshit is high, while the time to reshare it is basically zero, and I feel like it’s a war I can’t win. I wonder if there’s a way out that doesn’t end up affecting our own mental health even more.

informapirata,
@informapirata@mastodon.uno avatar
fabio,
@fabio@manganiello.social avatar

@informapirata by the way, is it okay if I redirect a couple of boomers, reeled in by years of conspiracy theories on Facebook, to mastodon.uno as a form of temporary detox? 😆

fabio, to bitwarden
@fabio@manganiello.social avatar

Just migrated from #Bitwarden to #Vaultwarden.

Same API, same features, same UI, and support for other DBs than MSSQL.

One single stand-alone application vs. Bitwarden’s 10 Docker containers. 70MB of RAM vs. 2GB. 3MB of db storage vs. 300MB.

Why was a password manager supposed to take up so many resources in the first place? Just because it runs on a Microsoft-only stack and on .NET’s inefficient VM? Just because somebody thought that it was a good idea to split everything into separate containers (even icons and 2FA are modeled as separate services in Bitwarden)?

It reminds me of my recent migration from Mastodon to Akkoma. I got more features, 5GB of RAM freed up and 300GB of storage freed up almost overnight.

Writing and running inefficient software that pointlessly consumes all the resources available on a machine should be a crime in a world with limited resources.

It makes me think of how much shitty bloated software like @bitwarden, probably based on awfully inefficient languages and frameworks like Java, Ruby on Rails and .NET, is running out there, pointlessly sucking up resources to do simple jobs that could easily be done with 99% fewer resources.

Today’s developers, spoiled by IDEs, powerful machines, docker-compose and shortsighted “just throw more RAM at the problem” approaches, have forgotten how to write efficient software. Time for them to learn how to write good, efficient software again. Software doesn’t eat the world. Only shitty software built on shitty frameworks does.

davidculley,
@davidculley@sigmoid.social avatar

@fabio I’m curious to learn your view on why Ruby on Rails is bad. (Sorry to distract from Bitwarden, the original issue.)

Is the design already bad? Or is the design fine, just the implementation is suboptimal?

I’ve never written Ruby code and am just wondering because DHH is always so proud of what he built.

fabio,
@fabio@manganiello.social avatar

@davidculley my experience with Ruby on Rails applications mostly involves running both Mastodon and Gitlab on my servers.

In both cases, what I’ve noticed is that it’s not the language itself that is slow and heavy (Ruby’s weight is comparable to that of e.g. Python), but Sidekiq.

Sidekiq is the standard framework used by Ruby on Rails applications to schedule and run asynchronous jobs (processes, threads…), kind of akin to what php-fpm does for PHP.

In my experience, it’s hard to configure properly, and when it isn’t configured properly it ends up with endless pools of active jobs doing all kinds of things and sucking up all the resources you give them.
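For what it's worth, a minimal sketch of a sidekiq.yml that caps the worker pool (the queue names are Mastodon-style defaults and the numbers are placeholders to tune against your hardware; the database pool should be at least as large as the concurrency value):

# Cap each Sidekiq process at 5 worker threads.
concurrency: 5
queues:
  - default
  - push
  - mailers
  - pull
  - scheduler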

fabio, to random
@fabio@manganiello.social avatar

I’ve decided to go all-in with my own mail server. The #ProtonMail bridge over an SSH+VPN tunnel is no substitute for a proper mail server - plus it’s awfully slow when used as a full IMAP server and it breaks the IMAP implementation in a lot of ways.

I’ve created my new domain and gone through the configuration of DKIM/DMARC/SPF like a good postmaster, only to get immediately blacklisted by @spamhaus on my first outbound email.

I’ve been through this before, but in my previous experiences a blacklist removal ticket would be either resolved automatically or within a couple of hours at most.

In this case, nearly 24 hours and three tickets later, nothing is moving. Not even some directions on how to get removed, or an ETA. The mailboxes have already all been migrated, with forwarding configured on the old addresses, but outbound email is still broken, because being blacklisted by a single company means being unable to communicate with nearly any mail server out there.

Does anyone have any tips on how a #Spamhaus blacklist removal process can be sped up?

fabio,
@fabio@manganiello.social avatar

@djsumdog yup, indeed mail-tester.com gives a 10/10 to my new mail domain (it’s not the first time I’ve set one up), but @spamhaus only removed it from their blacklist a couple of hours ago.

And I still get quarantine reports from noreply-dmarc-support@google.com when I send emails to Gmail addresses, informing me that the email has been quarantined (although the report clearly states that all the checks are green).

I’m trying to imagine how the WWW would look if, in order to run my own website, I first needed the approval of an external company whose blacklist is used by 90% of the browsers out there; if every new website were blacklisted by default; if removal from that blacklist involved opening a ticket with that company; and if, even after removal, major browser vendors could still arbitrarily refuse to connect to your website (using very opaque acceptance criteria).

I understand that the potential for spam/scam is higher with email than with HTTP, but if my domain checks all the boxes (SPF/DMARC/DKIM/RDNS/TLS), then a “you can send email to everyone, and if something stays wrong for too long we blacklist you” policy should apply.
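For anyone going through the same setup, here's a minimal sketch of those DNS sanity checks, using the dnspython library (example.com and the DKIM selector are placeholders - yours will differ):

import dns.resolver

domain = 'example.com'   # placeholder domain
selector = 'mail'        # hypothetical DKIM selector

for name in (
    domain,                              # SPF lives in a TXT record at the apex
    f'_dmarc.{domain}',                  # DMARC policy
    f'{selector}._domainkey.{domain}',   # DKIM public key
):
    try:
        for rr in dns.resolver.resolve(name, 'TXT'):
            print(name, '->', rr.to_text())
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        print(name, '-> missing')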

kikobar, (edited )
@kikobar@acc4e.com avatar

@fabio
I run my own email server as well, and I've done so for more than 10 years.

Spamhaus has not been the biggest of my pains; that slot belonged to #Microsoft (particularly #hotmail) and #Yahoo, and it took years for them to finally accept our emails without complaints.

I believe part of the problem was that I hadn't implemented DMARC fully. I made two big mistakes, I think:

  • Choosing 'quarantine' instead of 'reject' in my DMARC directive (see the example record after this list). I did this because I wanted to monitor before turning stricter (and then forgot). I should have gone with the more restrictive directive from day 0.

  • Choosing not to send daily DMARC reports to other domains/servers, because I thought those messages would increase my traffic with spam servers and could be counterproductive. I believe it worked the other way around: not sending them actually cost my server some reputation, so if I did it again, I would be shooting out DMARC reports from day 0 as well.
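For reference, the stricter policy record described above would look something like this (example.com and the report addresses are placeholders):

_dmarc.example.com. IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com; ruf=mailto:dmarc-failures@example.com"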

Note that making these two mistakes won't prevent you from getting a 10/10 score on the email tests; however, your emails won't reach the usual suspects.

Until your server is considered 'mature' by Microsoft and Yahoo, you may consider using an alternative external email service such as AWS SES. It is extremely cheap and reliable.

@djsumdog @spamhaus

fabio, to random
@fabio@manganiello.social avatar

“We understand migrations of this magnitude can be challenging.”

No, this specific migration should have never happened in the first place.

Manifest V3 is an absolute disaster that makes developers’ lives hard for no reason other than Google being desperate to crack down on adblockers.

No additional security. No additional features. Nothing to make the pill sweeter. Just Google abusing its dominant position in the browser market to relentlessly push, for years, a new standard that makes intercepting and modifying HTTP requests at runtime ridiculously hard, so it can tame uBlock and friends. The whole declarativeNetRequest API that replaces the dear ol’ webRequest API is the equivalent of tying developers’ hands behind their backs with the sole purpose of increasing friction.
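To make the contrast concrete: under MV2, an extension could register a webRequest listener and decide per request, in code, whether to block it; under MV3, blocking has to be declared up front as static declarativeNetRequest rules like the (made-up) one below, with no extension code running at request time:

{
  "id": 1,
  "priority": 1,
  "action": { "type": "block" },
  "condition": {
    "urlFilter": "||ads.example.com^",
    "resourceTypes": ["script", "image", "xmlhttprequest"]
  }
}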

Manifest V3 is a technological abomination that greatly limits what extensions can do, and it should have never seen the light of day.

My browser extensions will stubbornly remain Manifest V2 only. Even if that means being compatible only with Firefox. I wish that more developers did the same, so the Chromium-based extensions ecosystem may become as barren as it deserves to be, and more people would switch to alternative browsers, but of course nobody likes to say no to the browser that serves ~90% of the market. I just hope that the folks at Mozilla won’t get strange ideas and will keep supporting Manifest V2 forever.

https://blog.chromium.org/2024/05/manifest-v2-phase-out-begins.html

collectifission,
@collectifission@greennuclear.online avatar

@fabio This is why Firefox is important.

fabio,
@fabio@manganiello.social avatar

@collectifission I don’t feel very comfortable with Mozilla’s stance on the topic either:

“Firefox, however, has no plans to deprecate MV2 and will continue to support MV2 extensions for the foreseeable future. And even if we re-evaluate this decision at some point down the road, we anticipate providing a notice of at least 12 months for developers to adjust accordingly and not feel rushed.”

I really don’t like that “…even if we re-evaluate … 12 months notice…“ part. I know that the folks at Mozilla may have the best intentions, but with all Chromium-based browsers leaving V2 behind Mozilla will basically become the sole maintainer of the old Manifest API. Brave at some point announced that they’ll keep supporting it as well, but 1. I don’t trust anything that comes out of Brendan Eich’s mouth, and 2. if they decide to keep supporting V2 then they’ll basically have to maintain, forever, their own fork of Chromium that keeps the old API.

I feel like Google used its weight and played the long game quite well here. Of course we all know why V3 came out. Of course nobody likes it. But hey, what are you going to do? Maintain your own fork of V2 without having even a fraction of Google’s resources, and to serve probably <5% of the market with a tiny subset of extensions that didn’t migrate? Hmm…

fabio, to threads
@fabio@manganiello.social avatar

I’ve had some quite passionate (euphemism) discussions in the past couple of days with people who accused me of “throwing minorities under the bus” or “allowing Meta to scoop up everybody’s posts” just because I’ve temporarily decided not to defederate Threads from my personal Akkoma instance.

What’s interesting is that some of those accusations came from people who had their profiles fully public and searchable, on instances with webfinger enabled and without authenticated API constraints.

Their posts are already available on any search engine, searchable on Mastodon, their profiles can already be enumerated via API, and, even if their instance blocks another one, users on the blocked instance may still be able to see their content (especially if reshared/quoted) through unauthenticated API calls. But yeah, they think that the problem is with my tiny personal instance not defederating what they don’t like.
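As a quick self-check (a hedged sketch; the instance name is a placeholder), you can verify whether an instance serves its public timeline to completely unauthenticated clients:

import requests

INSTANCE = 'mastodon.example'  # placeholder: the instance to check

# A 200 with JSON here means the public timeline is readable with no
# authentication at all.
r = requests.get(f'https://{INSTANCE}/api/v1/timelines/public',
                 params={'limit': 5}, timeout=10)
print(r.status_code)
for status in (r.json() if r.ok else []):
    print(status['account']['acct'], '-', status['url'])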

I’ve got the impression that there’s a lot of confusion about how to customize the visibility and reach of your content, and how to make sure that only those you wish will ever be able to see it.

In order to prevent pointless retaliatory blocks/defederations towards instances whose only fault is not blocking what others want them to block, and in order to prevent the Fediverse from splintering into small islands along totally arbitrary fracture lines on the basis of unfounded beliefs about how it works, I’ve put together a sequence of steps to check whether your profile and your content are really private and sealed from unauthorized access (if that’s what you wish) - thanks to @gruff for the suggestion, and thanks to @evan for validating some of my assumptions.

@Gargron you’re welcome to validate my hypothesis about how AUTHORIZED_FETCH and DISALLOW_UNAUTHENTICATED_API_ACCESS work on Mastodon - I knew about AUTHORIZED_FETCH before, but I see that its functionality is now split across two environment variables, and I’m not sure whether both instance A and instance B need to have it enabled to prevent content leaking towards blocked instances through reshares/quotes.
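For reference, both knobs are plain environment variables in Mastodon's .env.production; what I'm asking to have validated is how they behave across instances, not their names:

AUTHORIZED_FETCH=true
DISALLOW_UNAUTHENTICATED_API_ACCESS=true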

cc @fediverse @privacy

evan,
@evan@cosocial.ca avatar

@fabio @fediverse @privacy @Gargron @gruff So, you're telling people to disable Webfinger so they can't be found? That also makes it really hard to follow people. I think a lot of fediverse software just won't work without Webfinger.

evan,
@evan@cosocial.ca avatar

@fabio @fediverse @privacy @gruff

Also, I think it would be a good idea to have a fork or spin of Mastodon where these options are the out-of-the-box defaults.

fabio, to Bulgaria
@fabio@manganiello.social avatar

I’m not sure who’s to blame in the #EU for giving in to the lobbying efforts of #Apple and #Microsoft (I can’t think of anything short of corruption of public officials behind the last-minute reversal of the #DMA definition of “gatekeeper” for the two most valuable companies in the world), but these are some consequences of this decision:

  1. If Apple isn’t labelled as a gatekeeper when it comes to #iMessage (an app used by 1.3B people), then they won’t have to comply and open up their walled garden to 3rd-party clients, while, for example, Messenger and WhatsApp will have to.
  2. If Microsoft isn’t labelled as a gatekeeper when it comes to #Edge (a piece of software installed as the default browser on an OS used by at least 1.5B people), then they’ll be free to keep rewriting https:// URLs as microsoft-edge:https:// just for the sake of intercepting everything and breaking compatibility, they won’t have to provide browser selection pop-ups on fresh Windows installations, and they can keep opening all the web views and PWAs on Windows devices in their own browser without providing alternatives - while, for example, Chrome and Safari will have to comply.

Shame on the EU for bending to them. Shame on these companies. Shame on their filthy lobbying efforts. Shame on everybody who uses their products.

The DMA and the “gatekeeper” definition was supposed to be the proof that these companies are now run by responsible adults. Being a “gatekeeper” is the acknowledgment that you are running platforms used by billions, and with great power comes great responsibility - towards society, towards the rules of the open market, towards your own competitors.

My employer might be included on the list soon as well, and I’m more than happy to comply. Large tech companies like ours have enjoyed lavish profits and outrageous market shares and ignored anti-competition laws for too long: now it’s time to prove that we’re all grown ups who want to play by the rules.

Instead, Apple and Microsoft have unleashed their overpaid legal counsels and lobbying crews, who have engaged in a pathetic dance to gaslight Brussels officials and force them to say that one of the largest messaging platforms and one of the most used browsers in the world, run by the two companies with the highest valuations in the world, are for some reason not expected to play by the rules written for everyone else.

Shame on them, and shame on the childish selfish sociopaths who run them.

https://www.theregister.com/2024/02/14/apple_microsoft_dma_exemptions

kikobar,
@kikobar@acc4e.com avatar

@fabio
"Following a thorough assessment of all arguments, taking into account input by relevant stakeholders, and after hearing the Digital Markets Advisory Committee, the Commission found that iMessage, Bing, Edge and Microsoft Advertising do not qualify as gatekeeper services."

If #Microsoft #Edge and #Apple #iMessage don't qualify as 'gatekeepers', who does?

#DMA

bryansmart,

@fabio @simon I doubt MS/Apple bamboozled the politicians. It's more likely the politicians are protecting their own personal investments. Happens all of the time.

fabio, to RaspberryPi
@fabio@manganiello.social avatar

I think that @Raspberry_Pi has got a problem.

PiCameras are amazing pieces of hardware with a software that could probably be managed better.

The original PiCamera libraries haven’t seen a commit in 4 years, as they were deprecated with the Bullseye release. I can still run some code that uses that library on some old boards only because I’m keeping them on an ancient version of Raspbian. Otherwise, raspistill and friends break in many possible ways on both Bullseye and Bookworm, and the package doesn’t even install on Arch ARM because raspberrypi-firmware is now gone.

A couple of years down the line, and its replacement, PiCamera2, is still in beta. It can be installed through a relatively smooth process only on the last two versions of RPi OS via apt, and it’s otherwise very hard to get installed on any other distro - on Arch it depends on packages that officially aren’t even available for ARM (like python-av), when installed via pip it tries to build the world even if some packages (like numpy) are already installed on the system, and I didn’t manage to get it to run on Ubuntu because of permission issues.

It’s really a pity, because a vibrant ecosystem of camera apps and scripts had been written against the old version of PiCamera, which could do a lot of things with very low entry barriers. Then a sudden deprecation was announced without a viable alternative, and a couple of years down the line that alternative isn’t quite stable yet. I’ve eventually resorted to leveraging the native v4l2 integration through ffmpeg as a cross-platform workaround, but that moves most of the burden to the CPU, so I’m not really using this hardware at its best, and it probably raises the technical bar for a lot of hobbyist makers.
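For reference, that workaround boils down to something like this (a hedged sketch; the device path, resolution and output path are all setup-dependent):

import subprocess

# Grab a single still frame from the camera through the kernel's V4L2 layer.
subprocess.run(
    [
        'ffmpeg',
        '-f', 'v4l2',               # read from a Video4Linux2 device
        '-video_size', '1280x720',  # capture resolution (setup-dependent)
        '-i', '/dev/video0',        # camera device node (setup-dependent)
        '-frames:v', '1',           # capture one frame and exit
        '-y', 'snapshot.jpg',
    ],
    check=True,
)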

Why was something so important to many users deprecated without any stable alternative on the horizon?

aallan,
@aallan@mastodon.social avatar

@fabio Because the author of picamera stopped supporting it? Wasn't our decision. The original picamera library wasn't written, or supported, by Raspberry Pi. The new picamera2 library was written in-house, and is extensively documented, https://datasheets.raspberrypi.com/camera/picamera2-manual.pdf. It might not do what you want (yet), but it's being actively maintained and worked on.

tripplehelix,
@tripplehelix@fosstodon.org avatar

@aallan @fabio I've found it incredibly easy to use, as you can use old 3rd-party camera v1s and the latest v3s with the same Python code. The extra settings and great documentation make up, for me, for what was lost with picamera.

fabio, to Youtube
@fabio@manganiello.social avatar

So many interesting insights in this research.

First, the technique used by these researchers to find out the actual number of videos hosted on #YouTube is definitely unorthodox (and inefficient), but it worked. Since Google won’t provide these numbers, and traditional crawling techniques are likely to bring to the surface only videos that enough people have already interacted with, the researchers ran an algorithm on a bunch of supercomputers that simply brute-forced possible combinations of YouTube ID strings, and kept track of the requests that didn’t end up in a 404.
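To get a feel for the idea, here's a toy sketch of the approach (nothing like the researchers' actual code or scale; the oEmbed endpoint is just one convenient way to test whether an ID exists):

import random
import string

import requests

# YouTube video IDs are 11 characters from a base64-style alphabet,
# so there are 64^11 (about 7.4e19) possible IDs - hence the supercomputers.
ALPHABET = string.ascii_letters + string.digits + '-_'

def random_video_id():
    return ''.join(random.choices(ALPHABET, k=11))

for _ in range(5):
    vid = random_video_id()
    r = requests.get(
        'https://www.youtube.com/oembed',
        params={'url': f'https://www.youtube.com/watch?v={vid}', 'format': 'json'},
        timeout=10,
    )
    print(vid, r.status_code)  # anything other than 404 may be a hit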

Second, even a conservative estimate of the number of videos on the platform is massive. 14 billion. Or nearly two videos for each human alive. With an unfathomably long tail.

To dig into the details: videos with 10,000 or more views account for nearly 94% of the site’s traffic overall, but less than 4% of total uploads - a quite extreme version of the 80-20 rule. About 5% of videos have no views at all, almost 75% have no comments, and even more have no likes.

This sheds an interesting light on what YouTube actually is. Not a product that should be monetized at all costs, but a collective memory of basically all the media content that the human race has created in the past two decades. It’s vital infrastructure that should require no entry barriers, and it should be treated as such.

Most of the minutes of videos stored on YouTube’s servers aren’t from MrBeast, Veritasium or Tom Scott. They are from church services, weddings, condo-board meetings, graduation ceremonies, school lectures, and all other things that humans record and want to save on a permanent storage - for themselves, their families, their co-workers, their friends or their classmates. With absolutely no intention of monetization, wider reach, or whatever stinky corporate metrics YouTube PMs are obsessed with.

When you store most of the media content that our whole species created in the past two decades, you have a strong duty of making it accessible to everyone, all the time, with the smallest amount of friction and UX disruption. And that’s exactly the opposite of what Google has been doing lately.

I don’t see a use case where we should keep publishing to YouTube, unless you are a professional creator with an actual following there. It should never be used for storing things meant to be shared only with a small circle, and even less as permanent storage for your memories. Google can’t be trusted, and yet we’ve handed them all of our creations of the past 20 years, trusting that they’ll take care of them forever - remember the “unlimited storage, forever” promise made by Gmail back in the day?

https://www.theatlantic.com/technology/archive/2024/01/how-many-videos-youtube-research/677250/

faithpeterson,

@fabio
I don't disagree. Maybe there's a role for public libraries here. And I wonder what the practical alternatives are. I can imagine governments at any level maintaining their own digital media archives, but I think they'll likely store only what's legally required (the text of meeting minutes, for example). Most community orgs and families lack the means and/or skill. Local newspapers did some of this decades ago. But in fact most past analog material is gone too.
@smallcircles

fabio,
@fabio@manganiello.social avatar

@faithpeterson @smallcircles I think that there are alternatives to public libraries in the digital world too.

First, the job of the Internet Archive is enormous and incredibly underappreciated. Not only does it barely receive any funding from governments (those who fund the preservation of physical books and cultural heritage should also fund the preservation of digital media, but that's not happening), but its activities are increasingly seen by Google and friends through a hostile lens.

Countless hostile actions are taken almost on a weekly basis against youtube-dl, yt-dlp, and other tools whose job is simply to download a media file from a URL. Instead of being properly funded, archivists have to constantly operate on the edge of legality, fearing that the tools they use to scrape the Web are just one DMCA takedown or API change away from becoming useless.

Not only are there no financial incentives to ensure that the job of archiving the Web can proceed unimpeded, but the laws around online content are also extremely skewed towards the platforms (and, in part, a tiny minority of high-profile creators) rather than the users - let alone towards our children and grandchildren, who may still be around when Google and its servers no longer are.

Second, if you already know that publishing your content to some closed platform comes with many risks in terms of accessibility, preservation and trustworthiness of the platform itself, then you shouldn't upload content to that platform in the first place. The "I can't migrate" excuse only really applies to those who already have tons of followers on their YouTube channel - and, as underlined by this study, that may be <1% of the total number of people who have uploaded at least one video to YouTube. Even without running our own servers, we can just leverage services like PeerTube that rely on open protocols and decentralized architectures to distribute the media.

fabio, to history
@fabio@manganiello.social avatar

#Platypush 1.0 is out!

It’s been 10 months and 1049 commits since the latest release of Platypush, 7 years since the first commit, and 10 years since the first release of its ancestor, https://github.com/blacklight/evesp.

The past few months have been quite hectic and I have nearly rewritten the whole codebase, but I feel like the software is now at a stage where it’s mature and stable enough to be used by a larger audience.

The changelog is quite big, but it doesn’t even cover all the changes, as many integrations have been completely rewritten.

The biggest (breaking) change is the merge between plugins and backends. Now, except for those integrations that actually listen for messages and execute them (like HTTP and Redis), all the other integrations are plugins. This greatly simplifies the configuration and removes a lot of confusion for new users.

The Docker support has been greatly improved too. There are now officially supported multi-arch images for Alpine, Debian, Ubuntu and Fedora, an official docker-compose.yml file, and both the platydock and platyvenv utilities have been almost completely rewritten to seamlessly automate the creation and configuration of containers and virtual environments (respectively) starting from a single config.yaml.

And the Python API has become much simpler and consistent. No more __init__.py files that the user had to manually create in each subfolder of scripts, just drop a .py file with your automation in the scripts dir and it’ll be picked up. Moreover, the most common imports are now available on top level as well, and there’s no more need to create procedures/hooks/crons with varargs:

from platypush import run, when
from platypush.events.sun import SunsetEvent

# Hook: run this procedure whenever a SunsetEvent is triggered
@when(SunsetEvent)
def sunset_lights_on():
    run('light.hue.on')

There’s also a revamped documentation portal, which now includes both the wiki and the plugin reference.

Most of the integrations have been rewritten at different degrees, and in the process many bugs have been squashed, many features added and many APIs updated to be more consistent, so make sure to check the documentation pages of your integrations in order to migrate.

And if you have more requests or questions, feel free to open a ticket, a PR or ask on the Lemmy server.

https://blog.platypush.tech/article/Platypush-1.0-is-out

kikobar,
@kikobar@acc4e.com avatar

Congrats @fabio
Thanks for sharing!

fabio, to ai
@fabio@manganiello.social avatar

#NetBSD joins the ranks of software projects that ban #AI generated code.

How they are going to enforce such a ban is an obvious question lingering in the air.

Does it include only cases like “hey #ChatGPT write a suite of unit tests for this class”? Or also cases where #Copilot simply autocompletes a for loop while I’m typing it?

In the latter case, how would a hypothetical reviewer enforce the ban? How would a for loop autocompleted by Copilot, or the boilerplate population of hashmap values, look any different from one I would write myself?

And if the issue is with any code that isn’t directly written by a human, then why stop at modern AI generation? Why not include linters and traditional IDE autocomplete features too?

I have no doubt that the projects that are announcing these no-AI policies have good intentions, but it’s probably time for all of us to have an honest talk.

Code completion isn’t a clear-cut binary feature. It’s a big spectrum that goes from the old Exuberant Ctags to ChatGPT writing whole classes.

And code completion shouldn’t be banned. If it makes a developer more productive, and if the developer understands the code that is being completed, then such bans are akin to saying “drivers should only use cars with manual transmission because we feel that it’s more manly”. It’s a conservative and elitist act of shunning new productivity tools because we can’t understand them and regulate them properly.

And more people need to call out the bluff: in cases where the AI only completes a few lines of code, it’s basically impossible to tell whether that snippet was written by a human or an AI assistant.

https://tech.slashdot.org/story/24/05/17/007240/netbsd-bans-ai-generated-code?utm_source=rss1.0mainlinkanon&utm_medium=feed

netbsd,
@netbsd@mastodon.sdf.org avatar

@fabio It is enforced by legal contract that every committer is required to sign.

BSD projects have historically had very strict rules on the copyright of submissions for very good reasons - https://en.wikipedia.org/wiki/UNIX_System_Laboratories%2C_Inc._v._Berkeley_Software_Design%2C_Inc.
