Started in mid-November, and despite repeated requests from Tuta(nota) and reassurances from MS, it’s still happening and MS has gone silent on the subject.
Well, MS being anti-competitive as usual. Side note: I like Tuta very much, finally an independent provider, but I would never use it as they don’t provide IMAP/SMTP.
Nah, they’ll make it free after a few months; better to have a percentage of users running an old version of Windows that is safe than to deal with the blowback of zero-days piling up.
I’ve been a software engineer for 10 years now but want to work with Linux more in a professional setting (not to mention the number of layoffs in the dev industry has me thinking a backup plan might be a good idea). I have been using Linux exclusively on my personal machine for about 15 years now, so I’m not too worried...
Hi, I started self-hosting quite a few things and would like a domain to use for when I’m out and about, instead of having to remember what my IP currently is. What are the newest providers of services that can accomplish this?
Oh I know he does, the project has almost 3x the money in donations/premium features compared to what is required to run it. However, it was never about making a profit, nor will he sell out to a bigger company and suddenly shut down all the free services. Actually, I hope he continues to make a profit and increases it, because he deserves it.
True, but that goes back to the irony of “I want to self-host, and therefore I need a service provider…”, in this case HE. And it won’t take care of the IPv4 issue; we can’t just assume every network we use to connect to a home setup will be IPv6 capable. At that point you can just pick a Cloudflare tunnel and have it all working.
Yes, ISPs do assign IPv6 blocks via Prefix Delegation; the thing is that Prefix Delegation is done over DHCP. They’ll assign a block, and if your router/device restarts they’ll just give you a new prefix. In some even more annoying cases you can get a new prefix whenever the lease expires.
Obviously you should pick a domain that is run by Josh (the guy who runs freedns.afraid.org), such as mooo.com, and you won’t have issues. freedns.afraid.org/domain/registry/
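If you want to script the update yourself, here is a minimal sketch in Python, assuming freedns.afraid.org’s HTTP update endpoint; the token in the URL is a placeholder, copy the real update URL from your account page and run this from cron every few minutes:

```python
#!/usr/bin/env python3
"""Tiny dynamic DNS updater sketch for freedns.afraid.org."""
import urllib.request

# Placeholder update URL: replace YOUR_TOKEN with the token from the
# Dynamic DNS page of your freedns.afraid.org account.
UPDATE_URL = "https://freedns.afraid.org/dynamic/update.php?YOUR_TOKEN"

def update() -> None:
    # The endpoint reads your source IP, so there is nothing to detect
    # locally; calling it when the IP has not changed is a no-op.
    with urllib.request.urlopen(UPDATE_URL, timeout=10) as resp:
        print(resp.read().decode().strip())

if __name__ == "__main__":
    update()
```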
I was here a week ago asking if Arch would be fine for a laptop used for university, as stability is a notable factor there, and I’m already using EndeavourOS at home. But now I’m curious about something else too - what about Arch vs NixOS?...
I’ve already considered Debian, but… I dunno, this isn’t what I’d call the most logical reason, but I just kinda don’t like it as my desktop OS. I’d use Debian over basically anything else for a server, but as a desktop OS I don’t like the vibe.
I was in the same boat as you are; flatpak essentially made it all perfect.
LXD/Incus provides a management and automation layer that really makes things work smoothly. With Incus you can create clusters; download, manage and create OS images; run backups and restores; bootstrap things with cloud-init; and move containers and VMs between servers (sometimes even live). Those are just a few of the things you can do with it that you can’t with pure KVM/libvirt.
Another big advantage is the fact that it provides a unified experience to deal with both containers and VMs, no need to learn two different tools / APIs as the same commands and options will be used to manage both. Even profiles defining storage, network resources and other policies can be shared and applied across both containers and VMs.
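To make that concrete, here is a rough sketch of driving the CLI from Python; the image and instance names are made up, and with Incus you would call incus instead of lxc, with the same arguments:

```python
import subprocess

def launch(name: str, vm: bool = False) -> None:
    """Launch a Debian 12 instance: the only difference between a
    container and a full VM is the --vm flag, same image, same command."""
    cmd = ["lxc", "launch", "images:debian/12", name]
    if vm:
        cmd.append("--vm")
    subprocess.run(cmd, check=True)

launch("web01")           # system container
launch("db01", vm=True)   # virtual machine, managed with the same tooling
```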
Incus isn’t about replacing existing virtualization techniques such as QEMU, KVM and libvirt; it is about augmenting them so they become easier to manage at scale and overall more efficient. It plays in the same space as, let’s say, Proxmox, and I can guarantee you that most people running it today will eventually move to Incus and never look back. It works way better: true open source, no bugs, no BS licenses and way less overhead.
I was planning to install Proxmox for my homelab but didn’t like that it was a whole distro, which shipped with an ancient kernel…
My issue with Proxmox isn’t that it ships an old kernel, it’s the state of that kernel: it is so mangled and twisted that they shouldn’t even be calling it a Linux kernel. Also, their management daemons and other internal shenanigans will delay your boot and crash your systems under certain circumstances.
For LXD you have a couple of options:
Debian 12 with LXD/LXC provided from their repositories;
Debian 12 with LXD/LXC provided from snap;
Ubuntu with snap.
In the first case you’ll get a very clean system with a very stable LXD 5.0.2 LTS; it works really well, however it doesn’t provide a WebUI. If you go with the snap options you’ll get LXD-UI after version 5.14.
Personally, I had been running LXD from snap since Debian 10, and moved to the LXD repository under Debian 12 because I don’t care about the WebUI and I do care about having clean systems… but I can see how some people, particularly those coming from Proxmox, would like the UI.
Side note: it should be possible to run the WebUI without snap and use it to control an LXD 5.0.2 LTS cluster, but as I don’t need it I never spent time on it. :)
Just be aware you can lose your data. It is really bad with long file names and folders with a large number of files; there are multiple reports online about people losing their data. I have personally experienced this with long file names, and once with an entire vault that suddenly couldn’t be opened.
Just be aware that some solutions like gocryptfs are provided as a user-space filesystem (FUSE). This has very low performance, and most importantly, if you require inotify on the decrypted data for some application, it won’t be available. In short, inotify is what allows apps to watch a filesystem for changes and act accordingly in real time.
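For reference, a minimal sketch of the gocryptfs workflow (the paths are placeholders, and both gocryptfs commands prompt for a password when run interactively):

```python
import subprocess

CIPHER = "/home/user/.vault.enc"  # encrypted files end up here
PLAIN = "/home/user/vault"        # decrypted FUSE view appears here

# One-time setup: writes gocryptfs.conf into CIPHER.
subprocess.run(["gocryptfs", "-init", CIPHER], check=True)

# Mount: anything written to PLAIN is encrypted on the fly into CIPHER.
# Remember: PLAIN is a FUSE mount, so inotify watches on it won't work.
subprocess.run(["gocryptfs", CIPHER, PLAIN], check=True)

# Unmount when done.
subprocess.run(["fusermount", "-u", PLAIN], check=True)
```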
Anyone else using Mac minis as VM hosts for self hosting? My Friendica server is a Linux VM on a Mac Mini in my living room. The VM is bound to a VLAN tagged network interface so it’s completely firewalled off from the rest of my network. Also got a second Linux VM on the same box for hosting local stuff on my main VLAN (HomeBridge/etc).
I feel like they’re really nice platforms for this, if not the cheapest. Cheaper than one might think though; I specced up an equivalent NUC and there wasn’t a lot of difference in price, and the M2 is really fast.
Written in Go, very small and portable: github.com/FiloSottile/mkcert. There’s also step-ca, which is bigger and uses ACME to deploy certificates; never used it tho.
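The mkcert workflow is about as simple as it gets; a quick sketch (the hostnames below are placeholders):

```python
import subprocess

# Create a local CA and add it to the system and browser trust stores.
subprocess.run(["mkcert", "-install"], check=True)

# Issue a cert for an internal host plus a wildcard; mkcert writes the
# certificate and key as PEM files into the current directory.
subprocess.run(["mkcert", "host1.home.arpa", "*.home.arpa"], check=True)
```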
Just be aware of the risks involved with running your own CA.
More or less, you’re adding a root certificate to your systems that will effectively accept any certificate issued with your CA’s key. If your private key gets stolen somehow and you don’t notice it, someone might be issuing certificates that are valid for those machines. Real CAs also have ways to revoke certificates that are checked by browsers (OCSP and CRLs), and they may employ other techniques such as cross-signing and chains of trust. All of those make it so a compromised certificate is revoked and no longer trusted by anyone after the fact.
Okay, that’s fair, but if your only concern is “I do not want any public CA to know the domains and subdomains I use”, you can get around that.
Let’s Encrypt now allows for wildcard certificates, so you can do something like *.network.example.org and have an SSL certificate that will cover any subdomain under network.example.org (eg. host1.network.example.org). Or even better, get a wildcard like *.example.org and you’ll be done for everything.
I’m just suggesting this alternative because it would make your life way easier and potentially more secure without actually revealing internal subdomains to the CA.
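For the record, a sketch of the wildcard issuance using certbot’s manual DNS-01 flow; wildcards require the DNS challenge, and a DNS plugin for your provider can automate the TXT record instead of the manual prompt:

```python
import subprocess

# Issue a wildcard certificate for *.network.example.org via DNS-01.
# --manual means certbot prompts you to create the TXT record yourself.
subprocess.run(
    [
        "certbot", "certonly",
        "--manual", "--preferred-challenges", "dns",
        "-d", "*.network.example.org",
        "-d", "network.example.org",
    ],
    check=True,
)
```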
Another option is to just issue certificates without a CA and accept them one at a time on each device. This won’t expose you to a possibly stolen CA private key, and you’ll get notified if the previously accepted certificate of some host changes.
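A minimal sketch of that, using Python’s cryptography package to generate a self-signed certificate you can then pin on each device (the hostname and output file names are placeholders):

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

HOST = "host1.network.example.org"  # placeholder

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, HOST)])
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: subject == issuer, no CA involved
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName(HOST)]), critical=False
    )
    .sign(key, hashes.SHA256())
)

with open("host1.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.PKCS8,
        serialization.NoEncryption(),
    ))
with open("host1.crt", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```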
It makes me deeply uncomfortable to use a public domain and get public certificates for something inherently so private
You can obviously run your own CA; great exercise, but why? What really makes you that uncomfortable? Once you go with the wildcard, nobody will know about your internal hosts or whatever. Even if the domain is taken down or you’re offline, your local DNS server will still be able to serve replies for those internal subdomains. You don’t need to publish those subdomains (A records) on a public DNS server, just on your own internal DNS server.
I guess if you really want to take the CA route, those tools I provided before are the best option. Simply issuing a certificate (without a CA) and allowing it in a browser might also work for you - less risk of a stolen private key, as described.
and it was still a bit of a pain to use. It seems like many others feel the same frustration, so I know I’m not alone. I often wonder how some other people claim they have no issues with Nextcloud, but hey, good for them!
That has been my experience, even on high-end hardware. It just doesn’t get better; NextCloud is a joke, full of bugs and issues, and it won’t get anywhere unless the people running the project decide to actually do things properly instead of going with the “next cool thing” at every opportunity.
I spent weeks researching and trying to tweak things, and at the end of the day NC always performs poorly. Most of the issues seem to be related to the poorly implemented WebUI, but the desktop app also has issues with large folders. I also tried the Docker version, the “all in one”: similar results, it simply doesn’t cut it.
I can’t help but wonder why so many people have been raving about Nextcloud all these years when ownCloud performs so well right out of the box. I’d love to hear about your experience and the services you use. Share your thoughts!
I believe the people who say they don’t have issues with it just aren’t really using it; after all, you can’t refute screenshots like the ones in the last link. This looks a lot like the Linux Desktop Delusion: people say it can be everything to everyone and that things are just great, while it fails at the most basic tasks a regular user might attempt. Since we’re in delusional land, let me link to this about LibreOffice, with pictures, being considered “good enough for most paperwork with good MS-Office compatibility”.
In short, a mix of Dovecot, Postfix, Syncthing, FileBrowser, WebDAV, Baikal, RoundCube (with Kolab plugins) and deployments to machines via Ansible. I also plan to integrate ejabberd, converse.js or Jitsi as a chat/call solution as soon as I have the time.
NC webmail is unusable; we have to pretend it doesn’t exist. Even with a completely empty IMAP server it takes 30 seconds to load.
Did you ever try the single sign-on option that allows users to log in to NextCloud using their IMAP credentials? After spending some time with it you’ll find it to be yet another half-broken feature: github.com/nextcloud/mail/issues/2698 (see my reply below).
Roundcube is 1000x faster and has no cache at all. Can’t they just peek at the source code?
Roundcube Open-Source Webmail Software Merges With Nextcloud… So, what should we expect now? To have RC as NextCloud’s default e-mail interface, OR to get RC filled with mindless bugs and crappy features/decisions? Most likely the latter, as NC’s “management” is what it is.
My second question about this merger is what is going to happen to the Kolab guys (kolab.org / kolabnow.com), as they’ve been the ones actually “paying the bill” and investing serious development time into RoundCube and into useful plugins such as CardDAV and CalDAV that are actually better than anything NextCloud has done to this day. Their funding comes from their e-mail hosting service, which is somewhat in competition with NextCloud. Around 2006, Kolab also raised more than $100k USD to develop RoundCube, so… that’s the kind of investment they’ve been working under.
Like Nextcloud Maps. On their blog, over a year ago, they wrote a post describing it as the next big thing after sliced bread.
I’ve been using Fedora for a couple of months now, and have been loving it. Very soon after I jumped into this community (among other Linux communities) and started laughing at all the people saying “KDE rules, GNOME drools,” and “GNOME is better, KDE is for babies.” But then I thought, “Why not give KDE a try? The...
Xfce works better everywhere and with everything; however, it falls into the same pitfall as KDE: eventually you’ll require some libadwaita application, flatpak and whatnot, and then you’ll end up with a Frankenstein system, half Xfce, half GNOME components, and themes that don’t apply to all apps equally. :(
I haven’t seen this posted yet here, but anybody self-hosting OwnCloud in a containerized environment may be exposing sensitive environment variables to the public internet. There may be other implications as well.
This is just a bad practice that was popularized by CI/CD solutions and later on by containers. I’m not saying containers aren’t good for security; what I’m saying is that they’re misused and abused, and that images shouldn’t even be a thing. Isolation is great; blindly trusting images made by someone else, and/or having people without any basic knowledge of security or infrastructure suddenly able to deploy complex solutions with a click, ends up in situations like this.
My point was that a “random deb” and/or “random web application” is way less likely to come with unsafe ENV-based configuration defaults. They usually go with config files securely stored in system directories, with the proper permissions enforced during installation, or simple .php config files that won’t get exposed and that require the user to configure things properly (like WordPress does by providing wp-config-sample.php but not the final wp-config.php file required to run it). Those are the solutions people used before the containerization hype, and things were just fine.
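To illustrate the pattern, a small sketch of an app that refuses to start unless its config file has sane permissions, instead of pulling secrets from the environment (the path is hypothetical):

```python
import json
import os
import stat

CONFIG_PATH = "/etc/myapp/config.json"  # hypothetical config location

def load_config() -> dict:
    # Refuse group- or world-accessible config files, the same way a
    # package installer would have enforced chmod 600 at install time.
    mode = os.stat(CONFIG_PATH).st_mode
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        raise SystemExit(f"{CONFIG_PATH} is too permissive; chmod 600 it")
    with open(CONFIG_PATH) as f:
        return json.load(f)

config = load_config()  # secrets never touch the process environment
```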
My second point: containers “lowered the bar”, allowing almost anyone to deploy complex solutions, and this was / is bound to be a disaster. No matter how safe Docker and others become, we can’t just expect people who know almost nothing about computers and networking to safely deploy things. Even the ones who know a lot, like developers, sometimes use Docker to deploy things they wouldn’t be able to deploy otherwise and fall into the pitfalls of not understanding networking and computer security.
In any case, I would not expose such an application outside of a VPN, which is a blanket security practice that most self-hosters should follow for most of their services…
Well, me too; however, I understand that some people might want to expose it publicly because they might want to access their instances from public or work machines where they can’t install a VPN. For those cases I would recommend 2FA and only allowing specific IP addresses or ranges to access the thing - the office’s static IP, their ISP’s ranges, or eventually only the user’s country.
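Conceptually something like this; the ranges below are documentation placeholders, and in practice you would enforce the allowlist at the reverse proxy or firewall rather than in the application:

```python
import ipaddress

# Placeholder allowlist: an office static IP and an ISP range.
ALLOWED = [
    ipaddress.ip_network("203.0.113.7/32"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def ip_allowed(remote_addr: str) -> bool:
    """Return True if the client address falls inside any allowed range."""
    addr = ipaddress.ip_address(remote_addr)
    return any(addr in net for net in ALLOWED)

assert ip_allowed("203.0.113.7")
assert not ip_allowed("192.0.2.1")
```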
I’m looking to try self-hosting an XMPP server for my family to use as a secure communication platform. I realize that end-to-end encryption with XMPP doesn’t seem as strong as something like Matrix, but my self-hosted Matrix server has been very unreliable....
It’s as if a handwritten signature, even in PNG form, has a magical superpower to make a document authentic. A bit like the security theater at entrances to buildings and transport.
While German ID cards don’t seem to have a digital / smartcard component, French ones do. In Portugal and Spain, at least, you’re required to sign digital documents with your identity card, using a smartcard reader plus a small utility app provided by the government. Only those have legal value, and this is enforced. Scanned handwritten signatures have zero value, and I know this also applies to other EU countries.
A government doesn’t need to take away your papers to deny you its services.
Yes, people just need to be dumb enough to vote the typical half-communist, half-socialist parties into power, and they’ll take care of ruining public services for everyone in equal measure. :)
MS Outlook Blocking Tutanota Emails As Spam (tuta.com)
Windows 10 support doesn't end in 2025 after all, if you pay up (www.xda-developers.com)
Is the Linux Foundation Certified System Admin (LFCS) worth it?
What modern solutions are there similar to dyndns?
Arch or NixOS?
Incus has joined LinuxContainers (LXC) (linuxcontainers.org)
I stumbled upon this interesting platform and thought I’d share....
Is there a tool to encrypt folders in real time?
I’m looking for a VeraCrypt/TrueCrypt equivalent, but at the folder level and with dynamic sizing...
Does anybody know where to get analytical/academic software?
I thought my university provided a copy of Mnova, but they didn’t. Are there repositories for this kind of software?
Help creating a standalone, internet-disconnected, preloaded music player
I would like to build a music player / device for an older, not tech-savvy person....
Ethical cloud (VPS) provider recommendation
Does such a thing exist? A coop cloud provider? A unionized cloud provider? An ethical cloud provider? A good guy cloud provider?
Do you run a private CA? Could you tell me about your certificate setup if you do?
Hi, I was looking at private CAs since I don’t want to pay for a domain to use in my homelab....
I've tried ownCloud.
Hey there!...
Made the switch to KDE
Disclosure of sensitive credentials and configuration in containerized deployments - ownCloud (owncloud.com)
XMPP Server?
EP rejects mass scanning of private messages - European Digital Rights (EDRi) (edri.org)
Vyos adds git commit archive support (blog.vyos.io)
Can I use two different drives?
I am building a NAS in RAID 1 (mirror) mode. Should I buy 2 of the same drive from the same manufacturer, or does it not matter so much?