TCB13
@TCB13@lemmy.world

TCB13,

Well, MS being anti-competitive as usual. Side note: I like Tuta very much, finally an independent provider, but I would never use it because they don’t provide IMAP/SMTP access.

TCB13,

Microsoft in December 2025: “We’ll continue to provide important security updates for Windows 10 customers until 2028.”

TCB13,

Nah, they’ll make it free after a few months; better to have a percentage of users running an old version of Windows that is safe than to deal with the blowback of zero-days piling up.

Is the Linux Foundation Certified System Admin (LFCS) worth it?

I’ve been a software engineer for 10 years now but want to work with Linux more in a professional setting (not to mention the number of layoffs in the dev industry has me thinking a backup plan might be a good idea). I have been using Linux exclusively on my personal machine for about 15 years now, so I’m not too worried...

TCB13,

In some places people don’t even know what LFCS is. Related: lemmy.world/comment/571106

TCB13,

Docker is questionable open-source and depends on Dockerhub that isn’t open.

TCB13,

No, but that’s what everyone ends up using, and the thing is designed to work really well with Docker Hub and not so much with others.

TCB13,

freedns.afraid.org is free, reliable, and isn’t run for profit.

TCB13,

Oh, I know he does; the project takes in almost 3x the money in donations/premium features than what is required to run it. However, it was never about making a profit, nor will he sell out to a bigger company and suddenly shut down all the free services. Actually, I hope he continues to make a profit and grows it, because he deserves it.

TCB13,

… that may change at any time.

TCB13,

Well, unfortunately we can’t escape our ISP and a DNS provider.

TCB13,

Assuming you’ve a static IP and you’re irresponsible :P

TCB13,

True, but that goes back to the irony of “I want to self-host, and therefore I need a service provider…”, in this case HE. And it won’t take care of the IPv4 issue; we can’t just assume every network we use to connect to a home setup will be IPv6-capable. At that point you might as well pick a Cloudflare tunnel and have it all working.

TCB13,

Yes, ISPs do assign IPv6 blocks via Prefix Delegation; the thing is that Prefix Delegation is done over DHCPv6. They’ll assign a block, and if your router/device restarts they’ll just give you a new prefix. In some even more annoying cases you can get a new prefix whenever the lease expires.

TCB13,

Obviously you should pick a domain that is run by josh (the guy who runs freedns.afraid.org), such as mooo.com, and you won’t have issues. freedns.afraid.org/domain/registry/

TCB13,

I’ve never had issues with josh’s domains; maybe contact him?

TCB13,

Neither: rock-solid Debian + Flatpak for the latest software.

TCB13,

I’ve already considered Debian, but… I dunno, this isn’t what I’d call the most logical reason, but I just kinda don’t like it as my desktop OS. I’d use Debian over basically anything else for a server, but as a desktop OS I don’t like the vibe.

I was in the same boat as you; Flatpak essentially made it all perfect.

TCB13,

Fair enough.

TCB13, (edited )

LXD/Incus provides a management and automation layer that really makes things work smoothly. With Incus you can create clusters; download, manage and create OS images; run backups and restores; bootstrap things with cloud-init; and move containers and VMs between servers (sometimes even live). Those are just a few things you can do with it and not with pure KVM/libvirt.

Another big advantage is the fact that it provides a unified experience to deal with both containers and VMs, no need to learn two different tools / APIs as the same commands and options will be used to manage both. Even profiles defining storage, network resources and other policies can be shared and applied across both containers and VMs.

Incus isn’t about replacing existing virtualization techniques such as QEMU, KVM and libvirt; it’s about augmenting them so they become easier to manage at scale and overall more efficient. It plays in the same space as, let’s say, Proxmox, and I can guarantee you that most people running that today will eventually move to Incus and never look back. It works way better: true open source, no bugs, no BS licenses and way less overhead.
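A sketch of that unified workflow (instance names, the profile and the cluster member are all hypothetical; this assumes a working Incus install):

```shell
# Launch a Debian 12 system container and a Debian 12 VM with the same command
incus launch images:debian/12 web1
incus launch images:debian/12 vm1 --vm

# Snapshots, profiles and cluster moves work identically for both kinds
incus snapshot create web1 before-upgrade
incus profile add vm1 lan-bridge      # "lan-bridge" is a hypothetical profile
incus move web1 --target srv2         # "srv2" is a hypothetical cluster member
```

The point is that `--vm` is the only difference between the two instance types; everything else in the CLI and API stays the same.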

TCB13, (edited )

I was planning to install Proxmox for my homelab but didn’t like that it was a whole distro, which shipped with an ancient kernel…

My issue with Proxmox isn’t that it ships with an old kernel, it’s the state of that kernel: it is so mangled and twisted that they shouldn’t even be calling it a Linux kernel. Also, their management daemons and other internal shenanigans will delay your boot and crash your systems under certain circumstances.

For LXD you’ve a couple of options:

  1. Debian 12 with LXD/LXC provided from their repositories;
  2. Debian 12 with LXD/LXC provided from snap;
  3. Ubuntu with snap.

In the first case you’ll get a very clean system with a very stable LXD 5.0.2 LTS; it works really well, however it doesn’t provide a WebUI. If you go with the snap options you’ll get LXD-UI from version 5.14 onwards.
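In command form, the two routes boil down to something like this (a sketch; package and snap names as shipped in Debian 12 and snapd, run as root):

```shell
# Option 1: Debian 12 repositories (LXD 5.0.x LTS, no WebUI)
apt install lxd

# Options 2/3: snap (newer releases; LXD-UI available from 5.14)
snap install lxd

# Either way, bootstrap storage pool and network bridge interactively afterwards
lxd init
```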

Personally, I had been running LXD from snap since Debian 10, and moved to the LXD repository under Debian 12 because I don’t care about the WebUI and I do care about having clean systems… but I can see how some people, particularly those coming from Proxmox, would like the UI.

Side note: it should be possible to run the WebUI without snap and use it to control an LXD 5.0.2 LTS cluster, but as I don’t need it I never spent time on it. :)

TCB13,

Just be aware you can lose your data. It is really bad with long file names and folders with a large number of files; there are multiple reports online about people losing their data. I personally have experienced this with long file names, and once an entire vault suddenly couldn’t be opened.

TCB13, (edited )

Here’s a table with multiple solutions and comparisons: www.cryfs.org/comparison

https://lemmy.world/pictrs/image/930823e6-0012-4aa0-ae50-458ae0345924.png

Just be aware that some solutions, like gocryptfs, are implemented as a user-space filesystem (FUSE). This has very low performance and, most importantly, if you require inotify on the decrypted data for some application, it won’t be available. In short, inotify is what allows apps to watch a filesystem for changes and act accordingly in real time.

TCB13,

Better than nothing. :)

goatsarah, to selfhosted en-gb

Anyone else using Mac minis as VM hosts for self hosting? My Friendica server is a Linux VM on a Mac Mini in my living room. The VM is bound to a VLAN tagged network interface so it’s completely firewalled off from the rest of my network. Also got a second Linux VM on the same box for hosting local stuff on my main VLAN (HomeBridge/etc).

I feel like they’re really nice platforms for this, if not the cheapest. Cheaper than one might think though; I specced up an equivalent NUC and there wasn’t a lot of difference in price, and the M2 is really fast.

TCB13,

I specced up an equivalent NUC and there wasn’t a lot of difference in price, and the M2 is really fast.

How come? Second hand HP Mini should be cheaper…

TCB13, (edited )

So… you’re looking for communism in the form of web hosting? 😜

TCB13,

😂 😂 😂

TCB13,

Written in go, very small and portable: github.com/FiloSottile/mkcert. There’s also step-ca, bigger and uses ACME to deploy certificates, never used it tho.

Just be aware of the risks involved with running your own CA.
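A minimal mkcert session looks roughly like this (the hostname is just an example):

```shell
# Create a local root CA and install it into the system/browser trust stores
mkcert -install

# Issue a leaf certificate for a local hostname plus localhost
mkcert myapp.internal localhost 127.0.0.1
# mkcert writes the pair as ./myapp.internal+2.pem and ./myapp.internal+2-key.pem
```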

TCB13,

could you tell me why one would run this over plain OpenSSL with automation?

Those projects essentially are the automation…

what risks would I run running a private CA? I’d love to know!

…stackexchange.com/…/what-are-the-risks-of-instal…

More or less, you’re adding a root certificate to your systems that will effectively accept any certificate issued with your CA’s key. If your private key gets stolen somehow and you don’t notice it, someone might be issuing certificates that are valid for those machines. Real CAs also have ways to revoke certificates that are checked by browsers (OCSP and CRLs), and they may employ other techniques such as cross-signing and chains of trust. All of those make it so a compromised certificate is revoked and no longer trusted by anyone after the fact.
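For illustration, this is roughly what a revocation check looks like with OpenSSL’s OCSP client; the file names and responder URL are hypothetical, and a homegrown CA typically has no responder at all:

```shell
# Ask the CA's OCSP responder whether server.crt has been revoked
openssl ocsp -issuer ca.crt -cert server.crt \
  -url http://ocsp.example-ca.test/ -resp_text
```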

TCB13,

Why are you pushing for your own CA in the first place?

TCB13,

Okay, that’s fair, but if your only concern is “I do not want any public CA to know the domains and subdomains I use”, you can get around that.

Let’s Encrypt now allows for wildcards, so you can probably do something like *.network.example.org and have an SSL certificate that will cover any subdomain under network.example.org (e.g. host1.network.example.org). Or, even better, get a wildcard like *.example.org and you’ll be done for everything.

I’m just suggesting this alternative because it would make your life way easier and potentially more secure without actually revealing internal subdomains to the CA.
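With certbot, a wildcard issuance sketch looks like this (the domain is an example; wildcards require the DNS-01 challenge, so in practice a DNS plugin is preferable to manual mode):

```shell
certbot certonly --manual --preferred-challenges dns \
  -d 'example.org' -d '*.example.org'
# certbot prints a TXT record (_acme-challenge.example.org) to publish in DNS
```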

Another option is to just issue certificates without a CA and accept them one at a time on each device. This won’t expose you to a possibly stolen CA private key, and you’ll get notified if the previously accepted certificate of some host changes.


```shell
openssl req -x509 -nodes -newkey rsa:2048 \
  -subj "/CN=$DOMAIN_BASE/O=$ORG_NAME/OU=$ORG_UNIT_NAME/C=$COUNTRY" \
  -keyout $DOMAIN_BASE.key -out $DOMAIN_BASE.crt -days $OPT_days "${ALT_NAMES[@]}"
```
TCB13,

It makes me deeply uncomfortable to use a public domain and get public certificates for something inherently so private

You can obviously run your own CA; great exercise, but why? What really makes you that uncomfortable? Once you go with the wildcard, nobody will know about your internal hosts or whatever. Even if the domain is taken down, or you’re offline or whatever, your local DNS server will still be able to serve replies for those internal subdomains. You don’t need to publish those subdomains (A records) on a public DNS server, just on your own internal DNS server.

I guess if you really want to take the CA route, those tools I mentioned before are the best option. Simply issuing a certificate (without a CA) and allowing it in the browser might also work for you, with less risk of a stolen private key, as described.

I hope the links and tips helped you in some way.

TCB13,

and it was still a bit of a pain to use. It seems like many others feel the same frustration, so I know I’m not alone. I often wonder how some other people claim they have no issues with Nextcloud, but hey, good for them!

That has been my experience, even on high-end hardware. It just doesn’t get better; NextCloud is a joke, full of bugs and issues, and it won’t get anywhere unless the people running the project decide to actually do things properly instead of going for the “next cool thing” at every opportunity.

Here is a test I did with an AMD Ryzen 7 5700X + 32 GB of RAM: lemmy.world/comment/346174

I spent weeks researching and trying to tweak things, and at the end of the day NC always performs poorly. Most of the issues seem to be related to the poorly implemented WebUI, but the desktop app also has issues with large folders. I also tried the Docker version and the “all in one”, with similar results; it simply doesn’t cut it.

My experience with NC’s Webmail: lemmy.world/comment/5490189

I can’t help but wonder why so many people have been raving about Nextcloud all these years when ownCloud performs so well right out of the box. I’d love to hear about your experience and the services you use. Share your thoughts!

I believe the people who say they don’t have issues with it aren’t really using it; after all, you can’t refute screenshots like the ones in the last link. This looks a lot like the Linux Desktop delusion: people say it can be everything to everyone and that things are just great, while it fails at the most basic tasks a regular user might attempt. Since we’re in delusion land, let me link to this piece, with pictures, about LibreOffice being considered “good enough for most paperwork with good MS-Office compatibility”.

TCB13,

I dont understand why nextcloud is so slow. I tried it out recently and its just so slow to upload files.

Because nobody on that team knows how to design and code software.

TCB13,

Or not… lemmy.world/comment/346174

The AIO is an equally crappy solution that doesn’t work properly, like everything else in NC.

TCB13,

@Tiritibambix read the last part of my comment here. :)

In short, a mix of Dovecot, Postfix, Syncthing, FileBrowser, WebDAV, Baikal and RoundCube (with Kolab plugins), with deployments to machines via Ansible. I also plan to integrate ejabberd, converse.js or Jitsi as a chat/call solution as soon as I have the time.

TCB13, (edited )

NC webmail is unusable. We have to pretend it doesn’t exist. Even with a completely empty IMAP server it takes 30 seconds to load

Did you ever try the single sign-on option that allows users to log in to NextCloud using their IMAP credentials? After spending some time with it you’ll find it to be yet another half-broken feature: github.com/nextcloud/mail/issues/2698 (see my reply below).

Roundcube is 1000x faster and has no cache at all. Can’t they just peek the source code?

Roundcube Open-Source Webmail Software Merges With Nextcloud… So, what should we expect now? To have RC as NextCloud’s default e-mail interface, or to get RC filled with mindless bugs and crappy features/decisions? Most likely the latter, as NC’s “management” is what it is.

My second question about this merger is what is going to happen to the Kolab guys (kolab.org / kolabnow.com), as they’ve been the ones actually “paying the bill” and investing serious development time into RoundCube and into useful plugins such as CardDAV and CalDAV that are better than anything NextCloud has done to this day. Their funding comes from their e-mail hosting service, which is somewhat in competition with NextCloud. Around 2006 Kolab also raised more than $100k USD to develop RoundCube, so that’s the kind of investment they’ve been working under.

Like nextcloud maps. In their blog they wrote a post over one year ago describing it as the next big thing after sliced bread.

Another joke by NextCloud.

TCB13, (edited )

Xfce works better everywhere and with everything; however, it falls into the same pitfall as KDE: eventually you’ll require some libadwaita application, Flatpak and whatnot, and then you’ll end up with a Frankenstein system, half Xfce, half GNOME components, with themes that don’t apply to all apps equally. :(

TCB13,

“Docker is safer” sure.

TCB13,

No, but it only happens because this tech exists in the first place, and things got way more cumbersome and overcomplicated than they should be.

TCB13,

This is just a bad practice that was popularized by CI/CD solutions and later on by containers. I’m not saying containers aren’t good for security; what I’m saying is that they’re misused and abused, and that images shouldn’t even be a thing. Isolation is great; blindly trusting images made by someone else, and/or having people with no basic knowledge of security or infrastructure suddenly able to deploy complex solutions with a click, ends up in situations like this.

TCB13,

My point was that a “random deb” and/or “random web application” is way less likely to come with unsafe default ENV-based configuration files. They usually go with config files securely stored in system directories, with the proper permissions enforced during installation, or simple .php config files that won’t get exposed and that require the user to configure them properly (like WordPress does by providing wp-config-sample.php but not the final wp-config.php file required to run it). Those are the solutions people used before the containerization hype, and things were just fine.

My second point: containers “lowered the bar”, allowing almost anyone to deploy complex solutions, and this was / is bound to be a disaster. No matter how safe Docker and others become, we can’t just expect people who know almost nothing about computers and networking to be able to safely deploy things. Even the ones that know a lot, like developers, sometimes use Docker to deploy things they wouldn’t be able to deploy otherwise and fall into the pitfalls of not understanding networking and computer security.

In any case, I would not expose such application outside of a VPN, which is a blanket security practice that most selfhosters should do for most of their services…

Well, me too; however, I understand that some people might want to expose it publicly because they want to access their instances from public or work machines where they can’t install a VPN. For those cases I would recommend 2FA and only allowing specific IP addresses or ranges to access the thing: the office’s static IP, their ISP’s range, or eventually only the user’s country.
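A minimal sketch of that allowlist idea with iptables (the address is from the documentation range and purely illustrative; nftables or a reverse proxy ACL would work just as well):

```shell
# Allow only the office's static IP to reach the HTTPS port, drop everyone else
iptables -A INPUT -p tcp --dport 443 -s 203.0.113.10 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j DROP
```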

TCB13,

Implemented in Python with the initial goal of “bridging every chat platform in existence”; it was just bound to be a disaster.

Just like Chia: who in their right mind would do those things in Python?

TCB13, (edited )

It’s as if a handwritten signature, even in PNG form, has a magical superpower to make a document authentic. A bit like the security theater at entrances to buildings and transport.

While German ID cards don’t seem to have a digital/smartcard component, French ones do. In Portugal and Spain, at least, you’re required to sign digital documents with your identity card, using a smartcard reader plus a small utility app provided by the government. Only those signatures have legal value, and this is enforced. Scanned handwritten signatures have zero value, and I know this also applies in other EU countries.

TCB13,

A government doesn’t need to take away your papers to deny you its services.

Yes, people just need to be dumb enough to vote the typical half-communist, half-socialist parties into power, and they’ll take care of ruining public services for everyone in equal measure. :)

TCB13,

I wasn’t saying the opposite at all.

TCB13,

If you don’t set up and monitor things properly, everything fails and keeps going in an unpredictable state, even a software RAID.
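For Linux md software RAID specifically, the monitoring part can be as simple as this (a sketch; it assumes an array at /dev/md0, and the mail address is an example):

```shell
# Quick health check: [UU] means all members up, [U_] means degraded
cat /proc/mdstat
mdadm --detail /dev/md0

# Run mdadm in monitor mode so a failing disk actually gets reported
mdadm --monitor --scan --mail root@localhost --daemonise
```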
