
vegetaaaaaaa

@vegetaaaaaaa@lemmy.world


vegetaaaaaaa, (edited )

sftp://USERNAME@SERVER:PORT in the address bar of most file managers will work. You can omit the port if it’s the default (22), and you can omit the username if it’s the same as your local user.

You can also add the server as a favorite/shortcut in your file manager sidebar (it works at least in Thunar and Nautilus). Or you can edit ~/.config/gtk-3.0/bookmarks directly:


file:///some/local/directory
file:///some/other/directory
sftp://my.example.org/home/myuser my.example.org
sftp://otheruser@my.example.net:2222/home/otheruser my.example.net

How responsive is your Nextcloud?

My Nextcloud has always been sluggish — navigating and interacting isn’t snappy/responsive, changing between apps is very slow, loading tasks is horrible, etc. I’m curious what the experience is like for other people. I’d also be curious to know how you have your Nextcloud set up (install method, server hardware, any...

vegetaaaaaaa,

Quite fast.

KVM/libvirt VM with 4GB RAM and 4 vCPU cores shared with a dozen other services. Storage is not the fastest (qcow2-backed disks on an ext4 partition inside a LUKS volume on a 5400 RPM hard drive… I might move it to an SSD sometime soon), so features highly dependent on disk I/O (thumbnailing) are sometimes sluggish. There is an occasional slowdown, which I suppose is caused by APCu caches periodically being dropped, but once a page is loaded and the cache is warmed up, it becomes fast again.

Standard apache + php-fpm + postgresql setup as described in the Nextcloud official documentation, automated through this ansible role.
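
If you hit similar slowdowns, the caching configuration is a quick thing to check. A minimal sketch, assuming a standard install under /var/www/nextcloud owned by www-data (paths and user are placeholders, adjust to your setup):

# Enable the APCu local cache (cold caches are usually what makes page loads sluggish)
sudo -u www-data php /var/www/nextcloud/occ config:system:set memcache.local --value '\OC\Memcache\APCu'
# Add any missing database indices, a common cause of slow listings after upgrades
sudo -u www-data php /var/www/nextcloud/occ db:add-missing-indices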

vegetaaaaaaa, (edited )

VMs have a lot of additional overhead.

The overhead is minimal: KVM VMs have near-native performance (KVM is a type 1 hypervisor). There is some memory overhead since each VM runs its own kernel, but much of it is reclaimed by KSM [1], a memory de-duplication mechanism.

Each VM runs its own system services (think systemd, logging, etc) so there is some memory/disk usage overhead there - but it would be the same with Incus/LXC as they do the same thing (they only share the same kernel).

serverfault.com/…/so-really-what-is-the-overhead-…
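
To see whether KSM is actually earning its keep on a given hypervisor (standard sysfs paths, numbers are per-host):

# 1 means the KSM thread is running
cat /sys/kernel/mm/ksm/run
# pages currently de-duplicated across VMs
cat /sys/kernel/mm/ksm/pages_sharing
# rough estimate of memory saved, assuming the default 4 KiB page size
echo "$(( $(cat /sys/kernel/mm/ksm/pages_sharing) * 4 / 1024 )) MiB saved"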

I usually go for bare-metal > on top of that, multiple VMs separated by context (think “tenant”, production/testing, public/confidential/secret, etc.) > applications running inside the VMs, containerized or not. VMs provide strong isolation which containers do not; at the very minimum it’s good to have separate VMs for “serious business” and “lab” contexts. Service/application isolation through namespaces/systemd has come a long way (see man systemd-analyze security) - for me the benefit of containerization is mostly ease of deployment and… ahem, running inscrutable binary images with out-of-date dependencies made by strangers on the Internet.
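
For the namespaces/systemd isolation mentioned above, systemd ships its own audit command (“myapp.service” below is just a placeholder for one of your units):

# exposure score for every service on the host (lower is better)
systemd-analyze security
# detailed breakdown of the sandboxing directives applied to a single unit
systemd-analyze security myapp.service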

If you go for a containerization solution on top of your VMs, I suggest looking into podman as a replacement for Docker (fewer bugs, less attack surface, no single point of failure in the form of a 1-million-lines-of-code daemon running as root, more unix-y, better integration with systemd [2]). But be aware of the maintenance overhead caused by containerization: if you’re serious about it you will probably end up maintaining your own images.
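
A sketch of that podman/systemd integration (rootless; the image and names are placeholders, and newer podman versions prefer quadlet units over generate systemd):

# run a rootless container...
podman run -d --name webapp -p 8080:80 docker.io/library/nginx:alpine
# ...and generate a systemd user unit so it is managed like any other service
podman generate systemd --new --files --name webapp
mkdir -p ~/.config/systemd/user
mv container-webapp.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-webapp.service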

vegetaaaaaaa,

Obfuscation can be helpful in not disclosing which are some services or naming schemes

The “obfuscation” benefits of wildcard certificates are very limited (public DNS records can still easily be found with tools such as sublist3r), and they’re definitely a security liability: if the private key of the cert is stolen from a single server, TLS is potentially compromised on all your servers using the wildcard cert.
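
Enumerating the names that are supposedly hidden usually takes one command (example.org is a placeholder):

# passive subdomain enumeration from public sources (search engines, DNS datasets, etc.)
sublist3r -d example.org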

What's a simple logging service?

Hiya, I’m looking to keep track of my different services in hosting via Unraid. Right now I’m hosting roughly 12 different services, but would be nice to have the logs of all my services in one place, preferably with a nice GUI. Are there any such services that could easily connect to the different docker containers I have...

vegetaaaaaaa, (edited )

Syslog over TCP with TLS (you don’t want those sweet packets containing sensitive data leaving your box unencrypted). Bonus points for mutual authentication between the server/clients (just got it working and it’s 👌 - my implementation here).
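
For reference, a minimal client-side sketch of the idea with rsyslog (legacy directive syntax; the CA/cert/key paths, server name and port are placeholders, not my exact linked implementation):

# requires the rsyslog-gnutls package on Debian
cat > /etc/rsyslog.d/10-forward-tls.conf <<'EOF'
$DefaultNetstreamDriver gtls
$DefaultNetstreamDriverCAFile /etc/rsyslog/ca.pem
# client certificate/key: this is what enables mutual authentication
$DefaultNetstreamDriverCertFile /etc/rsyslog/client-cert.pem
$DefaultNetstreamDriverKeyFile /etc/rsyslog/client-key.pem
$ActionSendStreamDriverMode 1
$ActionSendStreamDriverAuthMode x509/name
$ActionSendStreamDriverPermittedPeer logs.example.org
# @@ means TCP; 6514 is the conventional syslog-over-TLS port
*.* @@logs.example.org:6514
EOF
systemctl restart rsyslog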

It solves the aggregation part but doesn’t solve the viewing/analysis part. I usually use lnav on simple setups (with gotty as a poor man’s web interface for lnav when needed), and graylog on larger ones (definitely costly in terms of RAM and storage though).
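
Concretely, assuming the aggregated logs land under /var/log/remote/ (placeholder path):

# interactive viewing; lnav handles directories and rotated/compressed files
lnav /var/log/remote/
# poor man's web interface: serve that same lnav session over HTTP with gotty,
# bound to localhost so it sits behind your reverse proxy / authentication
gotty --address 127.0.0.1 --port 8080 lnav /var/log/remote/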

vegetaaaaaaa,

Would it be better to just have one PostgreSQL service running that serves both Nextcloud and Lemmy

Yes, performance and maintenance-wise.

If you’re concerned about database maintenance bringing down multiple services (can’t remember the last time I had to do this… once every few years to migrate postgres clusters to the next major version?), set up master-slave replication and be done with it.
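
A rough sketch of the replication setup (PostgreSQL version, hostnames, role name and password are placeholders; you also need a matching pg_hba.conf entry on the primary):

# on the primary: create a role allowed to replicate
sudo -u postgres psql -c "CREATE ROLE replicator WITH REPLICATION LOGIN PASSWORD 'changeme';"
# on the standby (postgres stopped, empty data directory): clone the primary
# and write the standby configuration (-R)
sudo -u postgres pg_basebackup -h primary.example.org -U replicator \
  -D /var/lib/postgresql/15/main -P -R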

vegetaaaaaaa,

/thread

This is my go-to setup.

I try to stick with libvirt/virsh (https://manpages.debian.org/bookworm/libvirt-clients/virsh.1.en.html) when I don’t need any graphical interface (it integrates beautifully with ansible [1]), or when I don’t need clustering/HA (libvirt does support “clustering” in some capacity: you can live migrate VMs between hosts, manage remote hypervisors from virsh/virt-manager, etc.). On development/lab desktops I bolt virt-manager on top so I have the exact same setup as my production setup, with a nice added GUI. I’ve heard that cockpit can be used as a web interface but have never tried it.
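
The remote management part boils down to connection URIs (hostnames and VM names below are placeholders):

# list and control guests on a remote hypervisor over SSH
virsh -c qemu+ssh://admin@hv1.example.org/system list --all
virsh -c qemu+ssh://admin@hv1.example.org/system start myvm
# virt-manager accepts the same qemu+ssh:// URIs for the graphical view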

Proxmox on more complex setups (I try to manage it using ansible/the API as much as possible, but the web UI is a nice touch for one-shot operations).

Re incus: I don’t know for sure yet. I have an old LXD setup at work that I’d like to migrate to something else, but I figured that since both libvirt and proxmox support management of LXC containers, I might as well consolidate and use one of these instead.

vegetaaaaaaa, (edited )

In my experience and for my mostly basic needs, major differences between libvirt and proxmox:

  • The “clustering” in libvirt is very limited (no HA, automatic fencing, ceph integration, etc., at least out of the box). I basically use it to 1. administer multiple libvirt hypervisors from a single libvirt/virt-manager instance, 2. migrate VMs between hypervisors (they need to be using shared storage for disks, etc.), but it covers 90% of my use cases.
  • On proxmox hosts I let proxmox manage the firewall, on libvirt hosts I manage it through firewalld like any other server (+ libvirt/qemu hooks for port forwarding).
  • On proxmox I use the built-in template feature to provision new VMs from a template, on libvirt I do a mix of virt-clone and virt-sysprep.
  • On libvirt I use virt-install and a Debian preseed.cfg to provision new templates (see the sketch after this list), on proxmox I do it… well… manually. But both support cloud-init based provisioning so I might standardize on that in the future (and ditch templates)
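
A sketch of that libvirt provisioning flow (names, sizes, OS variant and the preseed file are placeholders):

# build a Debian template non-interactively from a preseed file
virt-install --name debian12-template --memory 2048 --vcpus 2 \
  --disk size=20 --os-variant debian12 \
  --location http://deb.debian.org/debian/dists/bookworm/main/installer-amd64/ \
  --initrd-inject preseed.cfg \
  --extra-args "auto=true priority=critical console=ttyS0" \
  --graphics none --console pty,target_type=serial
# clone the template, then strip machine-specific state (machine-id, SSH host keys...)
# and set the new hostname
virt-clone --original debian12-template --name myvm --auto-clone
virt-sysprep -d myvm --hostname myvm
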
vegetaaaaaaa, (edited )

Did you read? I specifically said it didn’t, at least not out-of-the-box.

vegetaaaaaaa,

I should RTFM again… manpages.debian.org/bookworm/…/virsh.1.en.html has options for virsh migrate such as --copy-storage-all… Not sure how it would work for actual live migrations but I will definitely check it out. Thanks for the hint.
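
Something like this, I suppose (untested on my side for the storage copy; names are placeholders):

# live-migrate a running VM and copy its disk images to the destination,
# so no shared storage is required
virsh migrate --live --persistent --undefinesource --copy-storage-all \
  myvm qemu+ssh://admin@hv2.example.org/system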

vegetaaaaaaa,

The migration is bound to happen in the next few months, and I can’t recommend moving to incus yet since it’s not in stable/LTS repositories for Debian/Ubuntu, and I really don’t want to encourage adding third-party repositories to the mix - they are already widespread in the setup I inherited (new gig), and part of a major clusterfuck that is upgrade management (or the lack thereof). I really want to standardize on official distro repositories. On the other hand the current LXD packages are provided by snap (…) so that would still be an improvement, I guess.

Management is already sold to the idea of Proxmox (not by me), so I think I’ll take the path of least resistance. I’ve had mostly good experiences with it in the past, even if I found their custom kernels a bit strange to start with… do you have any links/info about the way in which Proxmox kernels/packages differ from Debian stable? I’d still like to put a word of caution about that.

vegetaaaaaaa, (edited )

clustering != HA

The “clustering” in libvirt is limited to remote controlling multiple nodes, and migrating VMs between them. To get the High Availability part you need to set it up through other means, e.g. pacemaker and a bunch of scripts.

vegetaaaaaaa, (edited )

DO NOT migrate / upgrade anything to the snap package

It was already in place when I came in (made me roll my eyes), and it’s a mess. As you said, there’s no proper upgrade path to anything else. So anyway…

you should migrate into LXD LTS from Debian 12 repositories

The LXD version in Debian 12 is buggy as fuck: this patch has not even been backported (github.com/canonical/lxd/issues/11902) and 5.0.2-5 is still affected. It was a dealbreaker in my previous tests, and doesn’t inspire confidence in the bug testing and patching process for this particular package. On top of it, it will be hard to convince the other guys that we should ditch Ubuntu and their shenanigans, and that we should migrate to good old Debian (especially if the lxd package is in such a state). Some parts of the job are cool, but I’m starting to see there’s strong resistance to change, so as I said, path of least resistance.

Do you have any links/info about the way in which Proxmox kernels/packages differ from Debian stable?

vegetaaaaaaa, (edited )

“buggy as fuck” because there’s a bug that makes it so you can’t easily run it if your locate is different than English?

It sends pretty bad signals when it causes a crash on the first lxd init (sure, I could make the case that there are workarounds - switch locales, create the bridge manually - but it doesn’t help make it appear as a better solution than proxmox). Whatever you call it, it’s a bad-looking bug, and the fact that it was not patched in Debian stable or backports makes me think there might be further hacks needed down the road for other stupid bugs like this one, so for now, hard pass on the Debian package (might file a bug on the BTS later).

About the link, Proxmox kernel is based on Ubuntu, not Debian…

Thanks for the link mate - Proxmox kernels are based on Ubuntu’s, which are in turn based on Debian’s, I’m not arguing about that - but I was specifically referring to this comment:

having to wait months for fixes already available upstream or so they would fix their own shit

Any example/link to bug reports for such fixes not being applied to proxmox kernels? Asking so I can raise an orange flag before it gets adopted without due consideration.

Password Manager that supports multiple databases/syncing?

I currently use keePass, and use it on both my PC and my phone. I like it because I can keep a copy of my DB on my phone and export it through a few different means. But I can’t seem to find an option to actually sync my local DB against a remote one. I’ve thought about switching to BitWarden but from what I can see it uses...

vegetaaaaaaa,

Why not self host vaultwarden?

How does that work when your vaultwarden instance goes down for some reason? Lose access to passwords? Or does the browser extension still have access to a cached copy of the db?

vegetaaaaaaa,

but more like playing a video game and it drops down to 15fps

Likely not a server-side problem (check CPU usage on the server): if the server was struggling to transcode, I think it would result in the playback pausing and resuming when the encoder catches up. Network/bandwidth problems would result in buffering. This looks like a client-side playback performance problem. What client are you using? Try multiple clients (use the web interface in a browser as a baseline) and see if it makes any difference.

vegetaaaaaaa,

i was just worried that the libraries in the container image are outdated

They actually are: trivy scan on authelia/authelia:latest pastebin.com/raw/czCYq9BF
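
To reproduce the scan yourself:

# list known CVEs in the image's OS packages and bundled libraries
trivy image authelia/authelia:latest
# only show issues that already have a fixed version available
trivy image --ignore-unfixed --severity HIGH,CRITICAL authelia/authelia:latest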

How would I automate (VM/LXC)-agnostic templates in Proxmox without creating golden images?

For context: I want to automatically enable Intel SGX for every VM and LXC in Proxmox, but it doesn’t seem like there’s a way to do it using APIs AFAIK (so Terraform is out of the question unless I’ve missed something) other than editing the template for the individual LXC/VM....

vegetaaaaaaa,

I would look into enabling it from cloud-init and/or during an initial provisioning step using ansible.

vegetaaaaaaa,

I would have liked for this to be possible directly through Terraform

Is it this proxmox provider? It does allow specifying cloud-init settings: registry.terraform.io/…/cloud_init_disk. So you can use runcmd (https://saturncloud.io/blog/how-to-properly-use-runcmd-and-scriptsuser-in-cloudinit/#2) or similar to do whatever is needed inside the guest to enable Intel SGX during the terraform provisioning step.
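
A minimal user-data sketch of the runcmd approach (the SGX-related command is a placeholder, replace it with whatever your guests actually need):

cat > user-data.yml <<'EOF'
#cloud-config
runcmd:
  # runs once, at first boot, inside the guest
  - [ sh, -c, "echo 'guest-side SGX setup goes here' >> /var/log/sgx-setup.log" ]
EOF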

AppArmour support for VMs, which is a secure enclave too (if I understand correctly).

Nope, Apparmor is a Mandatory Access Control (MAC) framework [1], similar to SELinux. It complements traditional Linux permissions (DAC, Discretionary Access Control). Apparmor is already enabled by default on Debian derivatives/Ubuntu.
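
You can check what it already covers on the hypervisor or inside a guest:

# show whether AppArmor is loaded and which profiles confine which processes
sudo aa-status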

vegetaaaaaaa,

I was under the impression that cloud-init could only really be used to run commands inside the guest?

Yes, that’s correct. I didn’t realize you had something to do outside the guest to enable it. What exactly? How do you solve it manually for now?

vegetaaaaaaa,

I see, I agree with you that it should be supported by the terraform provider if it is at the VM .conf level… maybe a new attribute in registry.terraform.io/providers/…/vm_qemu#smbios-…? I would start by requesting this feature in github.com/Telmate/…/issues, and maybe try to add it yourself (scratch your own itch, fix it for everyone in the process)? Good luck!

vegetaaaaaaa,

Lemmy/Reddit feeds, YouTube channels and other video hosting sites, IT/technical blogs and websites, software releases, newspapers, personal blogs… Currently there are 651 feeds in my feed reader. I actually found this post in my RSS feeds.

vegetaaaaaaa,

So much server-side code :/ I wrote my own in pure HTML/CSS which gets rebuilt by ansible depending on services installed on the host. Basic YAML config for custom links/title/message.

Next “big” change would be a dark theme, but I get by with Dark Reader which I need for other sites anyway. I think it looks ok

https://lemmy.world/pictrs/image/187f4b0c-e1f3-4486-9984-2a285ff632ab.png

vegetaaaaaaa, (edited )

You can probably use it by templating out github.com/nodiscc/xsrv/blob/…/index.html.j2 manually or using jinja2: basically remove the {% … %} markers and replace the {{ … }} blocks with your own text/links.

You will need a copy of the res directory alongside index.html (images, stylesheet).

You can duplicate the col-1-3 mobile-col-1-1 and col-1-6 mobile-col-1-2 divs as many times as you like and they will arrange themselves on the page responsively.
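
If you don’t want to pull in ansible just for this, any jinja2 renderer works; a sketch with the j2cli tool (vars.yml being a hypothetical YAML file holding your own links/title/message):

pip install j2cli
# render the template with your own variables into a static index.html
j2 index.html.j2 vars.yml > index.html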

But yeah this is actually made with ansible/integration with my roles in mind.

vegetaaaaaaa,

In my basic tests (initial setup of an AD DC, just a lab environment), it does work through WinRM. What gave me a headache was trying to enable “secure” (TLS) WinRM using a self-signed certificate. I should do some cleanup and post the setup someday.
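
For reference, the client side is mostly a handful of inventory variables; a sketch (host, user and auth transport are placeholders, and it assumes the ansible.windows collection is installed):

cat > inventory.ini <<'EOF'
[windows]
dc1.example.org

[windows:vars]
ansible_user=Administrator
ansible_connection=winrm
ansible_port=5986
ansible_winrm_transport=ntlm
# "ignore" while the WinRM listener uses a self-signed certificate; switch to
# "validate" once the cert is distributed to the control node
ansible_winrm_server_cert_validation=ignore
EOF
ansible -i inventory.ini windows -m ansible.windows.win_ping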
