fruitycoder,

IPFS. I’m really glad things like nerdctl and Guix support it, but I wish more things took advantage of the P2P filesystem.

Petals.dev and Hivemind (P2P ML inference and training) seem like the only truly viable options for building foundational models that aren’t owned solely by authoritarian governments and megacorps.

Matrix for federated, general-purpose real-time communication. (Not just chat, video, and images, but arbitrary data, with Third Room being one of the cooler demos of what is possible.)

ActivityPub for asynchronous communication between servers. The social media aspect is obviously the focus and the most mature, but I’m also excited for things like Forgejo (Codeberg.org) and GitLab adding support.

I am also excited for QUIC, for increased metadata privacy and fewer network round trips.

kevincox,
@kevincox@lemmy.ml avatar

The problem with IPFS is that kubo sucks. I used it for a while and it was always burning CPU while doing nothing, failing to publish anything to the DHT, and fetching files slowly. GC collects files that are supposed to be “pinned” by existing in MFS, and there are so many other bugs all of the time.

I would love to see a new server take off that was built with quality in mind.

I think the core protocol is pretty good (although I think they should have baked in encryption) but even the mid-level protocols like UnixFS and DAG whatever are mired in constant change for no real reason as the developers jump on new fads.

fruitycoder, (edited )

Slow, and it requires additional tooling to run normally. Just not a lot of development on the core pieces, tbh. Wasm support, for example, could make deployments way simpler (implement an IPFS proxy in any browser or server that supports Wasm), but the ticket for that kind of died off. There is a TypeScript implementation, Helia, that I haven’t checked out yet.

We are honestly kind of in a decentralization winter again, with ActivityPub being one of the few survivors gaining traction, from what it seems. Open source luckily doesn’t just up and die like that, so I still have hope for the next spring.

Mango,

FTP

Seriously guys, let’s share files the old fashioned way. Without bullshit.

Badabinski, (edited )

I'd like to interject for a moment. What you're referring to as FTP is, in fact, smelly hot garbage.

For context, I wrote this while waiting for a migraine to pass. I was angry at my brain for ruining my morning, and I like to shit on FTP. It's fun to be hyperbolic. I don't intend for this to be an attack on you, I was just bored and decided to write this ridiculous rant to pass the time.

I must once again rant about FTP. I've no idea if you're serious about liking it or you're just taking the piss, but seeing those three letters surrounded by whitespace reminds me of all the bad things in the world.

FTP is, as I've said, smelly hot garbage, and the infrastructure built to support FTP is even worse. Why? Well, one reason is that FTP has the most idiotic networking model conceivable. To see how crazy it is, let's compare to a more sane protocol, like HTTP (for simplicity's sake, I'll do HTTP/1.1). First, you get the underlying transport protocol stuff and probably SSL. The HTTP client opens a connection from some local ephemeral port to the destination server on port 80/443/whatever and does all the normal protocol things (so syn->synack->ack and Client Hello -> Server Hello+server cert -> client kex+change cipher -> change cipher -> encrypted data). FTP does TCP too! Same same so far (minus SSL, unless you're using FTPS). Next, the HTTP client goes like this:

GET /index.html HTTP/1.1
Host: www.whatever.the.fuck
# a bunch of other headers

and you know what fucking happens here? The fucking server responds with the data and a response code on the same goddamn TCP connection. You get a big, glorious response over the nice connection you established:

HTTP/1.1 200 OK
# a bunch of headers and shit

HERE'S YOUR DAMN DATA NERD

So that's nice, and the client you're using to read this used that flow (or an evolution of that flow if you're using HTTP/2 or HTTP/3). So what does FTP do? It does one of two really stupid things depending on whether you're using active or passive mode. Active mode is the default for the protocol (although not the default for most clients), so let's analyze that! First, your FTP client initiates a TCP connection to your server on port 21 (by default), and then the server just sends this:

<--- 220 Rebex FTP Server ready.

ok, that kinda came out of nowhere. You're probably using a modern client that saves you from all of the godawful footguns, so it then asks the server what it supports:

---> FEAT
<--- 211-Supported extensions:
<---  AUTH TLS;SSL;
<---  CDUP
<---  CLNT
# A whole bunch of other 4 letter acronyms. If I was writing an FTP server, I'd make it swear at the user since there are a lot of fun 4 letter words

There's some other bullshit we don't care about right now, although highlights include sending the username and password in plain text. There's also ASCII vs binary mode. WE'LL GET BACK TO THAT. :|

So then we want to do a LIST. You know what happens in active mode? Your computer opens up some random fucking TCP port. It then instructs the FTP server to CONNECT TO YOUR GODDAMN COMPUTER. Your computer is the server, and the other side is now the client. I would post a more detailed overview of the FTP commands, but most servers on the internet disable active mode because it's a goddamn liability. All of the sudden, your computer has to be internet facing with open firewall ports, and that's just a whole heap of shit.
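
For the morbidly curious, the active-mode dance looks roughly like this (addresses and ports made up; the port is split into two bytes, so 200,21 means 200*256+21 = 51221):

---> PORT 192,168,1,10,200,21
<--- 200 PORT command successful.
---> LIST
<--- 150 Opening ASCII mode data connection for file list.
# the server now connects from its port 20 back to 192.168.1.10:51221 and shovels the listing over that second connection
<--- 226 Transfer complete.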

I'm probably not blowing many minds right now because people know about this shit. I just want to mention that this is how FTP was built. The data plane and control plane are separate, and back in 19XX when this shit was invented, you could trust your fellows on ARPANET and NAT didn't exist and sure HAM radio operators here's the entire goddamn 44.0.0.0/8 block for you to do packet switched radio. A simple protocol for simple times, back before we knew what was good and what was bad.

So, active mode sucks! PASV is the future, and is the default on basically all modern clients and servers! Passive mode works exactly the same as the above, except when the client goes to LIST, the server opens some random TCP port (I've often seen something like 44000-44010) and tells the client, "hey you, connect to 1.2.3.4:44000 to get you your tasty data." Sounds great, right? Well, there's a problem that I actually touched on in my last paragraph. Back when this dogshit was first squeezed out in the 70s, everyone had a public address. There were SO MANY addresses! 4 billion addresses? We'll never use all of those! That is clearly not the case anymore. We don't have enough addresses, and now we have this wonderful thing called NAT.

Continued in part 2.

Badabinski, (edited )

PART 2.

NAT, much like the city of Phoenix, is a monument to man's arrogance. Fuck NAT and fuck FTP. If your FTP server is listening directly on a public IP address hooked up directly to a proper router, then none of this applies. If you're anything like me, the last company I worked for (a small startup), or my current company (many many thousands of employees making software you know and may or may not hate, making many billions of dollars a year), then the majority of your servers are living in RFC1918 space. Traffic from the internet is making it to them via NAT (or NAT with extra steps, i.e. L4 load balancers).

A request comes in for $PUBLIC_IP TCP port 21 and is forwarded to your failure of a boxen at 10.0.54.187. Your FTP server is a big stupid idiot and doesn't know this. It thinks that it's king shit and has its own public IP address. Therefore, when it's deciding what ADDR:PORT it's going to tell the stupid FTP client to connect to, it just looks at one of the adapters on the box and says "oh, I'll tell this client on the internet to connect to 10.0.54.187:44007" and then I fucking cry. The FTP client is an idiot, but the IP stack on the client's home/business router is not and says "oh, that's an address living in RFC1918 space, I shouldn't send that out over the internet" and they don't get the results of their LIST.
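
To make that concrete, the failing exchange looks something like this (the address and port are packed into that comma-separated tuple, so 171,231 means port 171*256+231 = 44007):

---> PASV
<--- 227 Entering Passive Mode (10,0,54,187,171,231).
---> LIST
# the client dutifully tries to open a data connection to 10.0.54.187:44007, which its router rightly refuses to route, and the listing never arrives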

So, how do you fix this? Well, you fix it by not using FTP. Use SFTP USE SFTP USE SFTP FOR GOD'S SAKE. But since this world is a shit fucking place, you have two options. The best option is to configure your FTP server to lie about its IP address. Rather than being honest about what a fool it is, you can tell it to send your public IP address to the client rather than the network adapter IP address. Does your public IP address change? Fuck you, you get to write a daemon that checks for that shit, rewrites your FTP server config, and HUPs the bastard (or SIGTERMs it if your server sucks and can't do a live config reload).
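
If you're wondering what that daemon amounts to, here's a minimal sketch in Python. It assumes a vsftpd-style config with a pasv_address option; the config path, PID file, and the "what's my IP" service are all placeholders:

import os, re, signal, time, urllib.request

CONFIG = "/etc/vsftpd.conf"   # placeholder config path
PIDFILE = "/run/vsftpd.pid"   # placeholder PID file

def public_ip():
    # ask an external "what's my IP" service (placeholder URL)
    return urllib.request.urlopen("https://api.ipify.org").read().decode().strip()

while True:
    ip = public_ip()
    with open(CONFIG) as f:
        conf = f.read()
    # rewrite the advertised passive-mode address (vsftpd calls it pasv_address)
    new_conf = re.sub(r"^pasv_address=.*$", f"pasv_address={ip}", conf, flags=re.M)
    if new_conf != conf:
        with open(CONFIG, "w") as f:
            f.write(new_conf)
        with open(PIDFILE) as f:
            pid = int(f.read().strip())
        os.kill(pid, signal.SIGHUP)  # beg the server to reload its config
    time.sleep(300)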

Let's say that you don't want to do that. Let's say you work at a small company with a small business internet plan that gives you static IPs but a shitty modem. Let's say that you don't know what FTP is or how it works and your boss told you to get it set up ASAP and it's not working (because the client over in Bendoverville Arkansas is being told to connect to a 10.x.x.x address) and it surely must be your ISP's fault. So you call up Comcast Business/AT&T/Verizon/Whoeverthefuck and you complain at their technicians for hours and hours, and eventually you get connected to a human that knows what the problem is and tells you how to configure your stupid FTP server to lie like a little sinner. The big telco megacorps don't like that. They don't want to waste all those hours, and they don't want to hire too many people who can figure that shit out because it's expensive. You wanna know what those fucking asshole companies did?

Continued in part 3.

Badabinski, (edited )

PART 3.
They made their STUPID MODEMS FUCK WITH THE FTP PACKETS. I have personally experienced this with Comcast Business. The stupid piece of shit DOCSIS modem they provide intercepts the FTP packet from your server saying "oh, connect to this address: x.x.x.x:44010" and they rewrite the fucking address to the public IP. There is no way to turn just this horse piss off. Now, for average business customers, this probably saved Comcast a bunch of money in support calls. However, if you're using the so-called bridge mode on that degenerate piece of shit-wrapped-silicon (where rather than allowing the modem to give you a DHCP address, you just configure your system to have one of the addresses in the /29 space and the modem detects that and says oh okay don't NAT traffic when it's going to this address, just rewrite the MAC and shunt it over the right interface), then something funny happens. The modem still rewrites the contents of the packet, but it uses the wrong fucking IP address! Because the public IP that your server is running on is no longer available to the modem, the modem just chooses another fucking address. Then, the client tries to connect to 1.2.3.5 instead of 1.2.3.4 where your server is listening, the modem says "hey I'm 1.2.3.5 and you can fuck off, I'm dropping your SYN for port 44010", and I get an angry call from the client asking why they can't download their files using this worthless protocol. I remember having a conversation like this:

Me: "Just use SFTP on port 22!"
Client: "No! FTP is faster/more secure/good enough for my grandfather good enough for me/corporate won't allow port 22."
Me: "Comcast is fucking me right now. What if we lied and served SFTP over port 21?"
# we try it
Client: "It's not working! I can't even connect!"

I couldn't connect either. I couldn't connect to anything. Trying to do SFTP over port 21 caused the stupid fucking modem to CRASH.

Are you starting to see what the problem is? It's like Microsoft preserving bugs in Windows APIs so that shitty software doesn't break, and then they end up doing crazy gymnastics to accommodate old shit, like the Windows 8 -> Windows 10 thing where they couldn't use "Windows 9" because that would confuse software into thinking it was running "Windows 95" or "Windows 98". FTP has some bugfuck crazy design decisions that we've collectively decided to just "work around," and it leads to fucking gymnastics.

Speaking of bugfuck crazy design decisions, FTP's default file transfer mode intentionally mangles data!

Continued in part 4.

Badabinski, (edited )

PART 4.

You expect a file transfer program to reliably and faithfully transfer your files, byte-for-byte, from one system to another. FTP spits in your face and shits on your chest. You know how Linux uses LF (i.e. \n) for newlines and Windows uses CRLF (i.e. \r\n) for newlines? Pretty annoying, right? Well, FTP's ASCII mode will automatically rip off those \r characters for you! Sounds pretty sweet, right? Fuck no it's not. All of the sudden, your file checksums have changed. If you pass the same file back to a Windows user with a different and more sane file transfer system, then they get a broken file because FTP didn't mind its own fucking business. If you have a CRLF file and need an LF file, just explicitly use dos2unix. Wanna go the other way? unix2dos. The tool has been around since 1989 and it's great.

Now, what if you're not transferring text, but instead are transferring a picture of a cute cat? What if your binary data happens to have 0x0D0x0A somewhere in it? Well, ASCII mode will happily translate that to 0x0A and fucking ruin your adorable cat picture that you were going to share with your depressed significant other in an attempt to cheer them up. Now the ruined JPEG will remind them of the futility of their situation and they'll slide even deeper into cold emptiness. Thanks, FTP.
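
If you want to watch the corruption happen, here's a rough Python illustration of what ASCII mode effectively does in transit (the "cat picture" is a made-up handful of bytes):

import hashlib

original = bytes([0xFF, 0xD8, 0xFF, 0x0D, 0x0A, 0x42, 0x0D, 0x0A, 0x00])  # pretend JPEG data that contains CRLF
mangled = original.replace(b"\r\n", b"\n")  # the CRLF -> LF translation ASCII mode performs

print(len(original), hashlib.sha256(original).hexdigest())
print(len(mangled), hashlib.sha256(mangled).hexdigest())
# different length, different checksum: your adorable cat is now abstract art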

You can tell your client to use binary mode and this problem goes away! In fact, modern clients do this automatically so your SO gets to see the adorable fuzzy cat picture. But let's just stop and think about this. Why use a protocol that is dangerous by default? Why use a protocol that supports no form of security (unless you're using fucking godawful FTPS or FTP over SSH)? Why use a protocol that is so broken by design that small business hardware has been designed to try to unfuck it? Is it faster? I mean, not really. SFTP has encryption/decryption overhead, but your CPU is so fast that you'd need to transfer at 25+ Gb/s to notice it. Is it easier? Fuck no it's not easier, look at all of the stupid footguns I've just mentioned. Is it simpler? The line protocol is simple, but so is HTTP, and HTTP has a much simpler control flow path (merging the data and control planes is objectively the right thing to do in this context). And shit, you want a simple protocol for cases where you don't have a lot of CPU power? Use fucking TFTP. It's dogshit, but it was intentionally designed to be dogshit so that a fucking potato could receive data with it.

There is no task that is currently being done with FTP that couldn't be done more easily, more securely, and more quickly with some other protocol (like fucking SSH and SFTP, which is now built into fucking Windows for god's sake). Fuck FTP.

Mango,

I read the first two and kinda gave up my dude. Here’s my deal. I get that it’s bad under the hood. What else can I use that lets me and a friend pretend we just have folders in each other’s computers with just a port forward, IP, and a password?

Natanael,

That’s not even the type of setup you should use. Use a VPN of the type designed for games and IoT stuff, like ZeroTier, n2n, and more. Then you set up a local file share using something like Samba, only accessible by the people who can connect to your local network via the VPN.

The public facing VPN code will be MUCH more hardened against attack than your typical sharing tool with port forwarding.

Mango,

I’m less worried about attack than barriers to sharing.

BaldProphet,
BaldProphet avatar

ZeroTier is pretty easy to set up, but at the point where you're worrying about "barriers to sharing" you should probably just be using a cloud service anyway.

Mango,

Ahhh but that’s the thing. A middleman being necessary is very against my values. I don’t want there to be someone else in the middle limiting me or telling me no. I’m not letting someone else’s DMCA compliance tell me what I can have. I’m also not really interested in non-FLOSS.

Badabinski, (edited )

You should be able to just use ssh/sftp. There are lots of great clients, and you can absolutely still use usernames and passwords, no public/private key stuff required. You can even use ssh and scp right from powershell on Windows boxen if you're so inclined. There's winscp, and if you want filesystem mounting, there's this: https://github.com/winfsp/sshfs-win
For macos and Linux, the options are far more plentiful.
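
If it helps, the no-frills usage looks something like this (host and paths made up):

sftp alice@files.example.com   # interactive session, prompts for a password just like an FTP client
scp report.pdf alice@files.example.com:/home/alice/   # one-shot copy, no interactivity needed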

Edit: there's also file pizza, which is a file transfer thingy with no middle man that's open source, although it's not copyleft AFAICT: https://github.com/kern/filepizza
and similar tools. Not really what you're after, I just think it's neat.

Mango,

It’s probably been 15 years since I used ssh. I’m gonna guess there’s better UI for it now, or at least a UI at all.

Badabinski, (edited )

There are definitely a lot of good options out there. What are you using right now for regular old FTP? The odds are actually pretty good that it already supports SFTP. A lot of file management applications do both and lump them together, even though they're completely different protocols (sftp is from the late nineties).

If it doesn't, then I don't know what OS you're using, so I'll just recommend options for the big 3. For Windows, there's WinSCP. For MacOS there's Cyberduck. Most file managers on Linux distros let you just type sftp://me@wherever in the navigation bar, meaning you get a totally seamless experience with the rest of your FS.

EDIT: or, you can use sshfs-win on Windows and have your remote filesystem show up as a regular ol' drive, just like SMB. MacOS and Linux have sshfs, and I know there are GUIs wrapping sshfs on those platforms. I personally use sshfs at home and it's great (although no GUI wrapper, I'm a weirdo who doesn't use a graphical file manager at all).
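
For reference, mounting with sshfs is basically a one-liner (host and paths made up):

sshfs alice@files.example.com:/home/alice ~/remote   # mount the remote home directory at ~/remote
fusermount -u ~/remote   # unmount again (on Linux; macOS uses plain umount)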

Mango,

Oh I don’t have a computer right now. I got reamed by the law over a lie from a road rager and lost everything.

Badabinski,

:( I'm sorry to hear that. Well, for Android there's MaterialFiles, which is fully FLOSS and supports FTP, SFTP, and SMB. Not sure about iOS, but I imagine there are options there.

I hope that your journey through life becomes a little less rocky.

Mango,

Thanks for talking to me about all this.

Natanael,

If you set up port forwarding for file shares you must keep setting it up again for every new service.

If you set up a VPN once then you’re simply done. Every new service you set up is available directly.

Mango,

No middleman pls.

Natanael,

ZeroTier can be a fully self hosted VPN. You set up a server locally, port forward only the VPN service, and then everything else you run is accessible through it for the people you give access to.

Mango,

Who knows what else it does with that secret code.

Natanael,

There are multiple open source options, like n2n.

kevincox,
@kevincox@lemmy.ml avatar

You can set up SFTP with a password.

Or WebDAV isn’t that awful.

aksdb,

Have you considered publishing that as a book? (/s)

You are insane… in a good way. I love it. Fantastic read and I had to chuckle a few times.

Badabinski,

I'm glad that my grumpy migraine ramblings brought someone some joy!

lemmyreader,

In that case, I’d like to chime in and add NFS to this list. The often overlooked jewel of the glorious past days. /j

Mango,

So like… If I had a game installed on your computer, my computer could treat that game as if it’s local and load files over the Internet like it’s just reading my disk?

That is cool as fuck.

Decker108,

Wait, encryption counts as bullshit now? ;)

Mango,

FTP can be encrypted yo.

KillingTimeItself,

matrix, or at least interop standards for online communications. It’s such bullshit that you can make a shitty chat app and, just because it’s free and relatively featured, become the single existing monopoly of chat applications.

Like idgaf as long as i can host a bridge between discord and matrix or some shit, and you technically can, but it’s a right pain in the ass to do so.

kevincox,
@kevincox@lemmy.ml avatar

Yup. Way too many people using different chat apps. I’ve bridged most of them but still annoying.

For business, email is thankfully still pretty common. But some of them try to push you to one of the Facebook messengers.

I want an open widely used chat app ASAP.

KillingTimeItself,

and even with email, that’s still open. So not a huge concern, or at the very least standardized enough to make it easily interoperable.

But yeah, i would greatly appreciate anything that isn’t fucking discord.

hperrin,

I wish people used email for chat more. SMTP is actually a pretty great protocol for real-time communication. People think of it as this old, slow protocol, but that’s mostly because the big email providers make it slow. Gmail, by default, waits ten seconds before it even tries to send your message to the recipient’s server. And even then, most of them do so much processing on your messages that it usually takes several seconds from the time a message is received to the time it shows up in your account.

There’s a project called Delta Chat that makes email look and act like a chat app. If you have a competent email service, I think it’s better than texting. It doesn’t stomp on the images you send like SMS and Facebook do, everyone has it unlike all the proprietary services, and you can run your own server for it that interacts with everyone else’s servers.

Unfortunately, Google, Microsoft, etc all block you if you try to run your own server “to protect against spam”. Really, I’m convinced that’s just anticompetitive behavior. The fewer players are allowed to enter the email market, the less competition Gmail and Outlook will have.

As much as I like ProtonMail too, unfortunately their encryption model prevents it from working with Delta Chat. I’d love to see Proton make a compatible chat app that works with their service.

I made an email service called Port87 that I’m working on making compatible with Delta chat too. I’d love to see people using email the way it was originally meant to be used, to talk to each other, without being controlled by big businesses.

treadful,
@treadful@lemmy.zip avatar

SMTP is actually a pretty great protocol for real time communication.

remembers greylisting is a common thing

kevincox,
@kevincox@lemmy.ml avatar

Greylisting will typically only be applied to people you haven’t interacted with before, so I don’t think it is a big deal. It would be similar to how many major chat apps hide away suspicious messages from new people in some “invites” section that is often hidden by default.

hperrin,

Yes, I mentioned that. That’s not a protocol issue, that’s a big business controls everything issue.

morrowind,
@morrowind@lemmy.ml avatar

The delay is there because email has no deletion support.

And a host of other shortcomings.

I’d rather we replaced email with matrix

hperrin, (edited )

If you’re relying on the remote server to delete something, you can’t trust it no matter what protocol you’re using.

For a regular email, the chance to undo might be fine, but for real time communication, it’s just an unnecessary road block.

Maybe if it was optional per recipient, or per conversation, or better yet, depending on the presence of a header, it might be fine. Gmail only supports all-on or all-off.

morrowind,
@morrowind@lemmy.ml avatar

If you’re relying on the remote server to delete something, you can’t trust it no matter what protocol you’re using.

I mean yeah I wouldn’t bet my life on it, but for the 99% of regular communication it’s fine. That’s no reason to not have it in the protocol and muck around with 10 second delays instead.

hperrin,

Oh, another awesome thing about email is that you can ensure that your address is always yours, even if you use an email service provider like Gmail. Any provider that supports custom domains will allow you to use your own domain for your address, then if you want to change your provider, you keep your address. So, since I own hperrin.com, I can use the address me@hperrin.com, and I know it’ll always be mine as long as I pay for that domain.
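
The mechanics are just DNS: point your domain’s MX record at whichever provider you’re using today, and repoint it if you ever switch (the provider hostname below is a made-up example):

hperrin.com.    IN  MX  10  mx.example-provider.com.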

This is a much better model than anything else. Even on the fediverse, you can’t have your own address unless you run your own instance.

If your email service provider goes out of business or gets sold off (skiff.com, anyone?), as long as you’re on your own custom domain, your address is still yours.

I’m working on custom domains for Port87. It’s definitely a feature I think every email provider should offer.

dgriffith,

Yes, I shifted to my own domain after my default ISP of 20 years decided that email was just too hard, you know? They didn’t outright say it, they just started batch processing emails so that I’d get all my daily emails at around 2 am the next day. Super handy for time limited password reset emails!

A few hours reading a guide and setting up a $5/mo linode email server with SPF and dmarc, a few more hours transferring 20 years of IMAP mail from my old account to a folder, and a month or so of changing a few site contact emails over each day when they emailed something to my old account, and now I’ve got an email server on my own domain that is 10 times faster at sending/receiving mail than my old ISP ever was.
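
For anyone curious, the SPF and DMARC bits boil down to a couple of DNS TXT records along these lines (report address is a placeholder):

mydomain.com.           IN  TXT  "v=spf1 mx -all"
_dmarc.mydomain.com.    IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:postmaster@mydomain.com"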

And now I can have amazon@mydomain.com and career@mydomain.com and random other disposable addresses so that when they are inevitably sold off for the $$$ I can just dump them and maintain a spam free inbox.

kevincox,
@kevincox@lemmy.ml avatar

SMTP is a terrible protocol. Text-based for sending effectively binary data, with complex header wrapping and “generate a random delimiter” framing. We really need an HTTP/2 of SMTP.

That being said I agree that it exists and works. The biggest blocker to more IM-style communication is largely the UI and user expectations. I have no problem having quick back-and-forths over email but most people don’t expect it.

hperrin,

Fair enough. Sending binary data over SMTP adds a lot of overhead, because it all has to be encoded. We should fix that.

kevincox,
@kevincox@lemmy.ml avatar

Honestly my biggest complaint is header wrapping. Technically you need to wrap lines at 998 bytes (not that any reasonable server actually cares). But in order to wrap a header you need to add spaces (because you can only break a line after whitespace). But where spaces are unimportant depends on each specific header. So you need to have custom wrapping rules for each header.

In practice no one does this. They just hope that headers naturally have spaces or break them in random locations (corrupting them) because the protocol was too stupid.
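
For the unfamiliar, “wrapping” (folding) a header just means breaking it across lines, where every continuation line has to start with whitespace, something like:

Subject: This is a very long subject line that somebody has to fold
 somewhere after a space, and the continuation line starts with whitespace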

Binary protocols are just so much simpler. Give the length, then the data. Problem solved. Maybe we could even use a standard format for structured headers. But that would be harder to do while maintaining backwards compatibility.

MilitantVegan,

PGP/GPG. I would like to see the web of trust take off. Also, I love the aesthetic of anything that’s been signed, and would like to see blog posts everywhere wrapped in long blocks of random symbols.

KillingTimeItself,

key signing and web of trust is pretty cool but i’m somewhat opposed to it on a fundamental level. Let me decentralize my shit and mind my own business if you feel what i mean.

Anything that’s relatively centralized identity wise is not something i’m a huge fan of right off the hop.

69420,

Let me decentralize my shit…

Isn’t that why it’s a web of trust, and not a center of trust? I think you might be confusing that with public key infrastructure.

Also, you can’t decentralize your shit without a second party. That’s kind of the point.

KillingTimeItself,

Isn’t that why it’s a web of trust, and not a center of trust?

yes, but it’s still a trust, i don’t consider that to be fully decentralized. It serves a purpose don’t get me wrong, but i won’t be signing my online profiles using WoT keys anytime soon.

The web makes it decentralized, which is accurate, though i tend to use decentralize way more aggressively on a level local to me. I suppose it’s probably more dis-integrated, than anything. But whatever.

Natanael, (edited )

PGP has a bunch of limits (and I’m saying that as a cryptography nerd). We’ve learned a lot of things since the 90’s and the better solutions are specialized encryption protocols like MLS / Matrix (E2EE group messaging) and running all kinds of other protocols on top.

The portable identity part of PGP can be handled by something like DID documents, which work more like Keybase used to (depending on the specific implementation): you declare a list of supported protocols with public keys and accounts under your control, so people can still achieve the same effect of using a strong cryptographic identifier to communicate with you, but with forward secrecy supported by default and much lower risk of stuff like side-channel attacks.

MilitantVegan,

Yeah but the aesthetic!

Shawdow194,
Shawdow194 avatar

RCS compatibility between iOS and Android operating systems

lemmyvore,

Google has used RCS as their latest attempt at entering the messenger market. I really don’t see why anybody else would want to adopt it under these circumstances. I mean Samsung did but Samsung is playing their own little paranoid game with Google, they don’t really give a crap about RCS.

Basically Google killed RCS. They will never be able to make anybody adopt it against their will in the EU, people will stick to what messenger services they’re already using. If they ever attempt to turn it on by default in their own app it will turn into a regulatory issue so fast their head will spin.

bloodfart,

I actually feel the opposite.

RCS was designed from the ground up to be handled by the carrier in clear text like SMS; it doesn’t incorporate encryption in any way and doesn’t do much at all to address the untrustworthy nature of carriers and law enforcement nowadays.

It’s like those two protocols started developing at the same time and only Google kept extending RCS to keep some degree of feature parity with iMessage.

If we had to ditch iMessage it ought to be for some third option, not for questionably secure RCS. And what new bubble color can be used to indicate that someone’s using an unencrypted RCS server?

kevincox,
@kevincox@lemmy.ml avatar

I want us to stop using communication protocols that are tied to our connectivity providers. Let alone tied to a specific piece of hardware (SIM card).

“Telephone providers” should be just another ISP. And whatever I do over the network shouldn’t care if it is running on a mobile network or a landline fibre.

While we are at it let’s fuck off with this SIM shit. You don’t get to run code on my device. Give me an authentication key that I can use to connect to your network and then just transfer my packets. My device runs my code.

lurch,

i wish all the big players would agree on one of the many open chat and IM protocols. it’s like kindergarten where the toddlers don’t want to share toys

halm,
@halm@leminal.space avatar

Was it really back in 2009 that both Google and Facebook used XMPP compatible chats? Those were the days.

webjukebox,

I was the cool guy with all the chats in one place: Pidgin.

halm,
@halm@leminal.space avatar

We had the future in our hands but our corporate platform overlords made a wrong turn at Albuquerque.

kevincox,
@kevincox@lemmy.ml avatar

Facebook had an XMPP client API. It didn’t federate (and wasn’t really true XMPP resulting in many quirks).

darklamer,
@darklamer@lemmy.dbzer0.com avatar
hyperhopper,

Why should this be at the editor level? There should be a linter that applies all these stylistic formatting changes to all files automatically. If the developer’s own editing tools or personal workflow have a chance to introduce non-standard styles to the codebase, you have a deeper problem.

darklamer,
@darklamer@lemmy.dbzer0.com avatar

Why should this be at the editor level?

Because for every programming language there’ll be people using text editors, but you’ll never succeed in even creating code formatters for them all.

The greatness in this project is in aiming low and making things better through simple achievable goals.

kevincox,
@kevincox@lemmy.ml avatar

I want both. When I am typing code in my editor I want it to follow the styles of the project. Then when I run the linter/formatter it will fix the mistakes.

The last thing I want is to start a new if foo { statement and the indent is half of the indent of the if above. That would be too distracting.

lemmyreader,

XMPP

cyclohexane,

Why not matrix?

lemmyreader,

You’re going off-topic from the OP question :-) But to answer your new question : I do not trust Matrix enough when it comes to privacy. I know that this link is old but still. disroot.org/en/blog/matrix-closure

Then again I do not trust Signal that much either but sometimes compromises need to be made to get things done. With XMPP the end user can host their own server if they wish to, without meta data going to a centralized point. And video calls via XMPP and Conversations were a pleasure to use when I used it during the Covid-19 pandemic.

Blizzard,

Call me old fashioned, but I still call it Jabber.

lemmyreader,

🙂

kevincox,
@kevincox@lemmy.ml avatar

It really is a better name.

halm,
@halm@leminal.space avatar

I came here to say matrix but I’m not gonna lie. If XMPP had gotten the traction it deserved we wouldn’t need matrix.

RotatingParts,

RSS (RDF Site Summary, or Really Simple Syndication). It is used a fair amount, but it is usually buried. Many people don’t know it exists, and because of that I am afraid it will one day go away.

I find it a great, simple way to stay up to date across multiple websites the way I want to (on my terms, not theirs). By the way, it works on Lemmy too :)
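
For anyone who has never looked under the hood, a feed is just a small XML document the site serves alongside its pages, roughly like this (site and items made up):

<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <link>https://example.com/</link>
    <description>Posts from an example blog</description>
    <item>
      <title>Hello, world</title>
      <link>https://example.com/hello-world</link>
      <pubDate>Mon, 01 Jan 2024 00:00:00 GMT</pubDate>
    </item>
  </channel>
</rss>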

kevincox,
@kevincox@lemmy.ml avatar

Honestly there is rarely a blog I want to follow that doesn’t have it. I do think it would be great to have more readers using it so that it becomes more significant, but for my reading it is actually pretty great.

bjoern_tantau,
@bjoern_tantau@swg-empire.de avatar

RSS. It’s still around but slowly dying out. I feel like it only gets added to new websites because the programmers like it.

moreeni, (edited )

It’s seen its renaissance recently

folkrav,

How so? Outside very niche stuff or podcasts, I just don’t see it used that often.

technom,

Most websites still use standard back ends with RSS support. Even static site generators do it. The only difficulty is user discovery.
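
Discovery is mostly a matter of sites advertising the feed in their HTML head so readers can pick it up automatically, e.g.:

<link rel="alternate" type="application/rss+xml" title="Example Blog" href="https://example.com/feed.xml">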

folkrav, (edited )

Yeah… It always being there hardly makes it a “renaissance”, no?

christophski,

Sadly, so many RSS feeds are just the first paragraph and not the whole article

BrianTheeBiscuiteer,

90% of the bullshit mass emails at my work could be an RSS feed.

bjoern_tantau,
@bjoern_tantau@swg-empire.de avatar

“THIS WHOLE MEETING COULD HAVE BEEN AN RSS FEED!”

thingsiplay,

I wish more websites would use RSS Feeds. :-(

mesamunefire,

There’s quite a few sites that still use it, and existing ones in the Fediverse have it built in (which is really cool). But you’re right, the general public has no concept of having something download and queue up on a service rather than just going to the site. And the RSS clients are all over the place with quality…

Static_Rocket,
@Static_Rocket@lemmy.world avatar

WebSub (formerly PubSubHubbub). Should have been a proper replacement for RSS with push support instead of polling. Too bad the docs were awful and adopting it as an end user was so difficult that it never caught on.

morrowind,
@morrowind@lemmy.ml avatar

I still want something push-based (without paying for those RSS-as-a-service offerings)

smpl,
@smpl@discuss.tchncs.de avatar

It’s part of the RSS 2.0 standard. Of course it requires adoption by feed publishers.

rssCloud
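
If memory serves, it’s just a <cloud> element on the channel telling clients where to register for pings, something like this (endpoint made up):

<cloud domain="rpc.example.com" port="80" path="/RPC2" registerProcedure="cloud.notify" protocol="xml-rpc" />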

mark,
@mark@programming.dev avatar

Oh neat! I didn’t know this existed. By any chance, do you know of any RSS readers that have implemented it?

kevincox,
@kevincox@lemmy.ml avatar

Literally nothing uses rssCloud. WebSub is what you want.

smpl,
@smpl@discuss.tchncs.de avatar

No, I’m sorry, I pull my feeds manually using a barebones reader. I’m guessing your best bet is one of the web-based readers, as it would require a client with a TCP port that’s reachable from the web. I have never seen a feed that provided the rssCloud feature though.

kevincox,
@kevincox@lemmy.ml avatar

I wouldn’t say that it never caught on. I run a feed reader and ~6% of feeds have WebSub. Most of these are probably wordpress.com blogs which include it by default.

YouTube also sort of supports it, but they don’t really follow the standard so I don’t think it counts.

But the nice thing about WebSub is that it is sort of an invisible upgrade to the existing feed (or any other HTTP URI) so it just works when blogs enable it.

Most major feed reader services support it. One problem is that you need a stable URL to receive the notifications. So it is hard to make work with client-side readers. But I don’t think there is really a way around this other than holding a connection open to every feed you follow. So I would say that it does its job well. I don’t really see a need to get to 100% adoption or whatever. If you have a simple static-site blog that updates every month or so I don’t think it is a big deal that it doesn’t support WebSub.
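
The “invisible upgrade” is literally just the feed (or the HTTP response serving it) advertising a hub, roughly like this (URLs made up):

Link: <https://hub.example.com/>; rel="hub"
Link: <https://example.com/feed.xml>; rel="self"

A reader that understands WebSub registers with the hub and gets updates pushed; everything else keeps polling the feed as before.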

x3i,

Unified Push.

Unbelievable that we have to rely on Google and co for something as essential as push messages! Even among the open source community, adoption is surprisingly limited.

technom,

Nobody knows about unifiedpush. Last time I checked, their Linux dbus distributor also wasn’t ready. There has to be a unified push to get it adopted.

redlue,

I don’t know about unifiedpush but I’m going to look into it now.

Using firebase for push notifications always seemed a little sketchy to me.

kevincox,
@kevincox@lemmy.ml avatar

Fuck Unified Push. Just use the Web Push standard. www.rfc-editor.org/rfc/rfc8030

It is what is used for browser push messages, is already widely supported, is compatible with existing push infrastructure and users, and is end-to-end encrypted. IDK why Unified Push felt the need to create a new protocol when a perfectly good one already existed.

Although there is no “client side” spec. The Unified Push client side could be useful. But they should throw away their custom backend protocol and just use Web Push.
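
For reference, the delivery side of RFC 8030 is just an HTTP POST to the subscription’s push resource (URL and token made up):

POST /push/some-subscription-token HTTP/1.1
Host: push.example.net
TTL: 15
# encrypted payload (RFC 8291) goes in the body

and the push service answers with something like:

HTTP/1.1 201 Created
Location: https://push.example.net/message/some-message-id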

whotookkarl,
@whotookkarl@lemmy.world avatar

TeX. I was able to use it during school for some beautiful typesetting and formatting, but nobody I work with wants to use anything other than plain text or, unfortunately more commonly, binary WYSIWYG editor formats. It’s frustrating and ugly.

redlue,

I support them and don’t support you.

Why overcomplicate things? If you want to work harder for nothing extra, great. But don’t put that on everyone else.

Norodix,

Binary WYSIWYG is shit for versioning, I hate it. Plaintext is fine but limited. I like both Markdown and TeX.

JasonDJ,

I somewhat disagree.

WYSIWYG document editors are terrible at getting things exactly as you want.

If you’re a perfectionist, especially in presentation, it’s probably easier to adopt TeX than it is to get (Open)Office to do exactly what you want.

redlue,

If you’re a perfectionist, especially in presentation

That’s the thing, most people don’t care because it doesn’t matter.

Imagine wasting their time to satisfy your OCD, lol.

Railison,

One way I do TeX now is with a few templates (letter, memo, etc.) and circulating files as PDF.

If you must use Word to circulate files, consider Pandoc as a way to get them out.
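
Something along these lines does the trick for the Word crowd (file names made up):

pandoc memo.tex -o memo.docx   # TeX in, Word out
pandoc memo.docx -o memo.tex   # and roughly back again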

leopold,

What WYSIWYG binary formats have you been using? OpenDocument is zipped XML. OOXML is also zipped XML. RTF is plain text. Everything else is dead. RTF is too, actually.

aarroyoc,
@aarroyoc@lemuria.es avatar

IPv6. The lack of IPv4 addresses is a problem, especially in poorer countries. But lots of servers and ISPs still don’t support it natively. And what’s worse, lots of sysadmins don’t want to learn it.

BaldProphet,
BaldProphet avatar

IPv6 is great, but NAT is quite functional and is prolonging the demise of IPv4.

Alk,

My isp decided to put me behind a CGNAT and broke my access to my network from outside my network. Wanted to charge me $5 a month to get around it. It’s not easy to get around for a layman, but possible. More than anything it just pissed me off that I’d have to pay for something that 1 day ago was free.

SandroHc,
@SandroHc@lemmy.world avatar

How can you bypass CGNAT?

Alk,

Set up a reverse proxy on another machine (like one of those free oracle cloud things). I can’t go into detail because I don’t know exactly how. I think cloudflare also has options for that for free. Either way it’s annoying.

ChilledPeppers,

Cloudflare Tunnel, and its alternatives such as LocalXpose, although the privacy is probably questionable, and many of them require a domain.

ashley,

NAT is also good security-wise. Personally I’d rather the inside of my network stay completely anonymous

frezik,

No. Stop spreading that myth. NAT does fuck all for security. If you want a border gateway, you can just have a border gateway.

KISSmyOSFeddit,

You can have that with ipv6, too.

cmnybo,

NAT is not for security, that’s what the firewall is for. Nobody can access your IPv6 network unless you allow access through the firewall.

ReversalHatchery,

I don’t think they were talking about access to the network.

technom,

“The inside of the network stay anonymous” sounds like they are talking about internet access to the internal network.

ReversalHatchery,

If computers connect to others through the internet, the IPv6 address can reveal how many computers there are on the local network, whether certain traffic to different destinations is coming from the same computer, and even when one of the computers has gone offline and then resumed from sleep/hibernation.
To me their comment means they want to avoid that, and I agree, I want to avoid that too. To fix this, I would need to configure NAT on my router for IPv6.

Yes IPv6 address privacy extensions help somewhat, but

  • computers won’t use a different v6 address for every distinct destination, they will just start using a new one from time to time
  • computers won’t stop using the old v6 address immediately after wakeup

With v4 addresses this did not really matter, because everything was being sent from the same public IP, and an outside observer could only see what a “network” was doing collectively. But with v6 an address identifies a computer, across websites/services. Even if it’s just for a “short” time, even if the address is randomized.

frezik,

If you want privacy, you need some kind of VPN or onion routing. Even if everything you list were correct, the difference between IPv4 and 6 for privacy would be marginal.

ReversalHatchery,

I don’t think this is so black and white. I’m a regular tor user, but so often it’s not worth it to load webpages through a dial-up connection, and then there are the sites that block access for tor users for some reason.

Even if everything you list were correct

Which parts weren’t?

the difference between IPv4 and 6 for privacy would be marginal

I disagree

vzq,

Found the guy that does not want to learn IPv6!

lemmyvore,

You’re thinking of a firewall. NAT is just the thing that makes a connection appear to come from an IP on the internet when it’s really coming from your router, and it’s not needed with IPv6. But you would not see any difference with IPv6 without it.

dgriffith, (edited )

You’re thinking of a firewall. NAT is just the thing that makes a connection appear to come from…

That connection only “appears to come from” if I explicitly put a rule in my NAT table directing it to my computer behind the router doing the NAT-ing.

Otherwise all connections through NAT are started from internal->external network requests and the state table in NAT keeps track of which internal IP is talking to which external IP and directs traffic as necessary.

So OP is correct, it does apply a measure of security. Port scanning someone behind NAT isn’t possible, you just end up port scanning their crappy NAT router provided by their ISP unless they have specifically opened up some ports and directed them to their internal IP address.

Compare this to IPV6 where you get a slice of the public address space to place your devices in and they are all directly addressable. In that case your crappy ISP router also is a “proper” firewall. Strangely enough it usually is a “stateful” firewall with default deny-all rules that tracks network connections and looks and performs almost exactly like the NAT version, just without address translation.

domi,
@domi@lemmy.secnd.me avatar

So OP is correct, it does apply a measure of security. Port scanning someone behind NAT isn’t possible, you just end up port scanning their crappy NAT router provided by their ISP unless they have specifically opened up some ports and directed them to their internal IP address.

You end up just port scanning their crappy router on IPv6 as well because ports that are not opened are stuck at the firewall either way, no matter if you use IPv4 or IPv6.

Just because every device gets a public IP does not mean that IP is publicly accessible.

An advantage that IPv6 has against port scanning is the absurdly large network sizes. For example, my ISP gives me a /56 prefix, that is 4,722,366,482,869,645,213,696 IPv6 addresses. Good luck finding the used ones with the port open you need.

Even with just a /64 prefix you get 18,446,744,073,709,551,616 addresses, way outside the feasibility of port scanning.

KillingTimeItself,

Compare this to IPV6 where you get a slice of the public address space to place your devices in and they are all directly addressable. In that case your crappy ISP router also is a “proper” firewall. Strangely enough it usually is a “stateful” firewall with default deny-all rules that tracks network connections and looks and performs almost exactly like the NAT version, just without address translation.

Realistically, it wouldn’t surprise me if ISPs started NATing on residential IPv6 networks, just for the simplicity, but still allowed end users to assign their own IPs if they so pleased. Given the surge in shitty IoT devices, that’s probably a good thing for most people. Though a firewall would also accomplish this as well.

KillingTimeItself,

you know what is more secure? Not being connected to the internet.

KillingTimeItself,

NAT is functional as long as you like NAT, which I’m pretty sure nobody does, so uh.

BaldProphet,
BaldProphet avatar

Plenty of people like NAT.

KillingTimeItself,

the only people that like nat are network admins, and ISPs.

Everyone else hates them. The rest don’t care, but they wouldn’t know a NAT if it hit them in the face.

Zer0_F0x,

Am sysadmin, can confirm I don’t wanna learn it.

KillingTimeItself,

not a sysadmin, but i admin a system or two, have yet to learn it myself, but will eventually learn it.

Nyanix,
@Nyanix@lemmy.ca avatar

Also a sysadmin, really don’t wanna learn it… or have to type it on the daily

vzq,

Lots of really large sites are horribly misconfigured. I had intermittent issues because one of the edge hosts in Netflix’s round-robin DNS did not do MTU discovery properly.

PlexSheep, (edited )

My university recently had Internet problems, where the DHCP server only leased out IPv6 addresses. For two days, we could all see which sites implemented IPv6 and which didn’t.

Many big corpo sites like GitHub or Discord apparently don’t. Small stuff like my personal website or suikagame.com do.

Vilian,

GitHub is so stupid with that, it’s actually funny

Mythnubb,

That’s a fun little game there!

folkrav,

Say this to my very large Canadian ISP who still doesn’t support IPv6 for residential customers. Last I checked, adoption in Canada was still under 50%.

calcopiritus,

50%?? I fucking wish. In Spain we are at 5%. I finally got IPv6 in my phone this year, but I want it in my home, which is still only available as IPv4 even if they’re the same ISP.

SexualPolytope,
@SexualPolytope@lemmy.sdf.org avatar

LaTeX. As someone in academia, I absolutely love it. It has some issues like package incompatibility, but it’s far far better than anything else I’ve used. It’s basically ubiquitous in academia, and I wish it were the case everywhere else as well.
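
For anyone who hasn’t seen it, a complete document is just plain text, along these lines (a trivial sketch):

\documentclass{article}
\begin{document}
\section{Introduction}
The quadratic formula is $x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$.
\end{document}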

folkrav,

I honestly just use it for my resume with a template I found, so my knowledge is extremely basic, but I really do love the concept that I can “compile” and actually see the source of my document’s formatting.

technom,

It really needs to significantly improve its live update capability. Typst is more capable in that regard.

cyclohexane,

Is it practical outside of academia? I heard the learning curve is kinda big

zagaberoo,

Nope and yep. It's an incredible tool, but it's got a vim-sized learning curve to really leverage it, plus other significant drawbacks. Still my beloved one-and-only when I can get away with it, but it's a bit of a masochistic acquired taste for sure.

Template tweaking, as I imagine academia heavily relies on, is really the closest to practical it gets. You do still get beautiful results, it’s just hard to express yourself arbitrarily without really committing to the bit.

cyclohexane,

Outside of academia, would you say it still provides significant upside over markdown?

technom,

Markdown and LaTeX are meant for entirely different purposes. It’s somewhat analogous to HTML vs PDF. While it’s possible to write books with Markdown, it’s a vastly inferior solution compared to latex or typst (for fixed format docs like books).

embed_me,
@embed_me@programming.dev avatar

It’s got a vim-sized learning curve to really leverage it

As a regular vim user, I have to say. Vim makes sense after you put some effort into learning it. I can’t say the same about latex.

boredsquirrel,

What about Typst?

Urist,
@Urist@lemmy.ml avatar

The Typst compiler is open source. It is the open core of the web app and we will develop and maintain it in cooperation with the community

Try Typst now!

Create a free account to join the public beta.

Beta software marketing with “free accounts” and an open core compiler for a (probably) future paid web service tells me all I need to know.

Even though LaTeX has issues, not being an online service is not one of them.

boredsquirrel,

They host a proprietary service that does all the stuff, but the compiler and spec are completely FOSS. So you need to create your own implementations, which is not hard.

I don’t think they will close-source the compiler. And that’s basically everything that’s needed?

I have 0 problems with people creating a fancy proprietary implementation to get people hooked. I will never use an online editor, but why care?

KillingTimeItself,

or you could also just make an open source wrapper for latex and call it a day.

Nothing needs to be closed source to get people to use it.

boredsquirrel,

And it isn’t :D The compiler produces PDFs which can be read with anything. The spec is open, so you can write the code with any editor.

Just needs integration; I’ll see if I can add the syntax highlighting to Kate.

KillingTimeItself,

i suppose that’s the case, but if you ever partially open source something, i think you’re probably trying a little too hard.

Urist,
@Urist@lemmy.ml avatar

Learning LaTeX and working around its quirks seems like a much better time investment than sidegrading to something that lives on premises set by a proprietary commercial project. If someone saw LaTeX and said “I want to make some version of this that is better”, without ulterior motives, they would probably just work on improving LaTeX (which a whole lot of people do).

Fancy does not mean better, and often is in many ways worse than plain old boring.

boredsquirrel,

You know Overleaf is a thing right?

Many projects need to be rewritten from scratch I think. But I also think an easier markup language for LaTeX could be possible, keeping all the nice templates etc.

Urist,
@Urist@lemmy.ml avatar

From the LaTeX project:

The experience gained from the production and maintenance of LaTeX2e (the version you have been using for many years) had a major influence on our goals for future development and on new code which is now integrated into LaTeX.

A while ago we made the decision to drop the idea of a separate LaTeX3 format that would exist in parallel to LaTeX2e, but instead decided to gradually modernize LaTeX to keep it competitive in today’s world while maintaining compatibility methods for older documents.

I think this decision was pretty much a good one.

Overleaf does not modernize LaTeX in meaningful ways. It only adds cloud functionality and glossy appearance that you can get on dedicated editors anyways.

boredsquirrel,

No, but Overleaf is just a proprietary fancy editor, like the Typst one. Meanwhile Typst is just as usable for building editors too.

I don’t see any arguments against Typst really. I use Markdown all the time and find it best, but lacking. Then LaTeX, which honestly I don’t want to learn, as it must be a pain to write.

Now in Typst, you can write academic papers etc. just as well. All you need is free software, with good backing and modern tooling (Rust, Cargo), so it runs everywhere. It’s pretty cool!

Urist,
@Urist@lemmy.ml avatar

Overleaf is not a benefactor that develops LaTeX for economic gain, unlike the situation with Typst, which relies on that (to my knowledge). LaTeX is also cross-platform, supported in tons of editors, and can easily be converted to other formats with pandoc. It is also somewhat supported in other formats via implementations such as KaTeX for Markdown and MathJax in HTML, due to being the de facto standard for math typesetting.

Writing papers in LaTeX is a joy, not a pain. The end result is also a beautifully typeset document rivalled by none.

oldfart,

I wrote my master’s in LaTeX and while I appreciate the structuredness and the fact I could use vim, it was so quirky. Having to spend half an hour to fix a non-obvious compile error, more than once, was a big distraction. I’m sure it gets better when you use it more, but I don’t think I have ever used it since. I’m not in academia and I don’t need to solve compile problems when creating an invoice or writing a letter to local government.

rottingleaf,

For me it’s more pleasant than editing formulae in LO, but still took a lot of time.

olafurp,

I personally feel like there should be a standard extended Markdown that allows LaTeX code.

halm,
@halm@leminal.space avatar

It’s basically ubiquitous in academia

You mean STEM. In the humanities we do just fine without, tyvm.

SexualPolytope,
@SexualPolytope@lemmy.sdf.org avatar

IDK dude. My sister is doing master’s in Philosophy. She uses LaTeX, and so do most others in her batch.

KillingTimeItself,

ok well to be fair philosophers will also fuck shit up just to make a point. So i’m not sure how fair that is.

embed_me,
@embed_me@programming.dev avatar

It’s not a standard, but it’s still an interesting piece of software, so I’ll post this here:

Joking aside, I love and hate it. Its paradigm is almost like using the C preprocessor to build a really awkward Turing-machine. TeX/LaTeX does a great job of what it was intended to do; it applies high quality typesetting rules to complex material and produces really good results. I love the output I can get with it and I will be eternally grateful that Donald Knuth decided to tackle this problem. And despite my complaints below, that gratitude is genuine. Being able to redefine something in a context-sensitive way, or to be able to rely on semantics to produce spacing appropriate to an operator vs a variable etc; these are beautiful things.

The problem is, at least once a day I’m left wishing I could just write a callable routine in a normal language with variables, types, arrays, loops and so on. You can implement all those things in TeX, but TeX doesn’t have a normal notion of strings, numbers or arrays, so it is rare that you can do a complicated thing in an efficient way, with readable code. So as a language, TeX frequently leads to cargo-cult programming. I’m not aware that you can invoke reflection after a page is output, to see what decisions on glue and breaks were made; but at the same time you can’t conditionally include something that is dependent on those decisions, since the decision will depend on what is included. This leads to some horrible conditionals combined with compiling twice, and the results are not always deterministic. Sometimes I find it’s quicker to work around things like that by writing an external program that modifies the resulting PDF output, but that seems perverse.

At the same time, there’s really nothing else out there that comes close to doing what LaTeX does, and if you have the patience, the quality of documents it can produce is essentially unbounded. The legacy of encodings, category codes, parameter limits, stack limits etc. just makes it very hard for package writers, and consumes a great deal of time for a lot of people. But maybe I am picky about things that a saner person would just live with.

A lot of very talented people have written a lot of very complex packages to save the user from these esoteric details, and as a result LaTeX is alive and well, and 99% of the time you can get the results you want, using off-the-shelf parts. The remaining 1% of the time, getting the result you want requires a level of expertise that is unreasonable to expect of users. (For comparison, I wrote an optimising C compiler and generally found it far easier to make that work as expected, than some of the things I’ve tried, and failed, to do properly in LaTeX. I now have a rule; if getting some weird alignment to work takes me more than an hour, I just fake it with a postscript file, an image, or write an external program to generate it longhand, in order to save my sanity.)

I think (and certainly hope) that LaTeX is here to stay, in much the same way that C and assembly language are. As time moves forward I think we’ll see more and more abstractions and fewer people dealing with the internals. But I will be forever grateful to the people who are experts in TeX, and who keep providing us with incredible packages.
