filesystem defenders act like it's natural for a computer to have a filesystem. meanwhile filesystem implementors are hard at work convincing me that not only is a filesystem a bad idea, it's also virtually impossible to implement any nontrivial optimizations in one without catastrophic data loss bugs
On the proper use of caching, compression, and encryption
A 300 GB Postgres database drops to 100 GB with compression applied before encryption and storage on SSD. As a result, the whole thing fits in RAM and there's almost no I/O left.
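The ordering in that post — compression *before* encryption — matters: ciphertext looks like random noise, so compressing after encrypting gains nothing. A minimal Python sketch of the effect, using `zlib` as the compressor and `os.urandom` as a stand-in for ciphertext (the repeated SQL line is just a hypothetical example of repetitive database pages):

```python
import os
import zlib

# Repetitive plaintext, loosely standing in for table/index pages.
plaintext = b"INSERT INTO events VALUES (42, 'page_view');\n" * 1000

# High-entropy random bytes, standing in for encrypted output.
ciphertext_like = os.urandom(len(plaintext))

compressed_plain = zlib.compress(plaintext)
compressed_cipher = zlib.compress(ciphertext_like)

print(f"plaintext:  {len(plaintext)} -> {len(compressed_plain)} bytes")
print(f"ciphertext: {len(ciphertext_like)} -> {len(compressed_cipher)} bytes")

# The repetitive plaintext shrinks dramatically; "compressing" the
# random, ciphertext-like data actually grows it slightly.
assert len(compressed_plain) < len(plaintext) // 10
assert len(compressed_cipher) >= len(ciphertext_like)
```

This is why ZFS-style transparent compression sits below (before) dataset encryption in the storage stack, and why a 3:1 ratio like the one above is plausible for a text-heavy database.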
I do love these new SSDs that I got over the weekend. They're soooo quiet, nice and fast, in particular with no latency when they spin up. That's all my toy budget for the month gone, but I think I'll buy the same again next month to replace the volume that stores my backups. That's still got oodles of space left, but the quiet is nice, and while spin-up time doesn't matter for my backups, having basically zero seek time will really help a lot.
re toots from a few days ago, I'm using #APFS instead of the much better #ZFS, because ZFS just didn't work very well on macOS when I played around with it. So in the slightly longer term I'm looking for a cheap machine on which I can run #FreeBSD and ZFS, one that will support at least 6 x 2.5" SSDs in its own chassis, all hot-swappable without opening it up and without tools, with at least two eSATA ports. Recommendations for something which will Just Work with FreeBSD, please!
In #postgresql database land today. Thinking through striping and mirroring on #ZFS. Klara Systems has unpacked a lot of this:
... This means two things: solid state and mirrors. SSD drives provide far lower latency than conventional drives possibly can. And mirrors provide far better random access performance than RAIDz can—particularly when we’re talking about small random access ops, which can’t be effectively spread across tons of disks in a wide striped vdev...
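A sketch of what that advice could look like in practice — a pool of striped mirrors (RAID10-style) rather than one wide RAIDz vdev, plus a dataset tuned for Postgres. The pool/dataset names and `/dev/ada*` device names are hypothetical, and the exact `recordsize` is a judgment call, not gospel:

```shell
# Two mirrored pairs, striped together: random reads can be served
# from either side of each mirror, unlike RAIDz.
zpool create tank \
    mirror /dev/ada0 /dev/ada1 \
    mirror /dev/ada2 /dev/ada3

# Dataset for PostgreSQL data; a small recordsize keeps ZFS records
# close to Postgres's 8K pages, and lz4 is cheap enough to leave on.
zfs create -o recordsize=16K -o compression=lz4 tank/pgdata
```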
Am I missing something obvious with #zfs? All the vdevs in my pool have names like /dev/disk5s1, and the numbers can change depending on what order they power up. I can't find any way of giving them names like "ssd-with-red-sticker-on", "ssd-with-blue-sticker-on" and so on, which would make swapping out a failed disk soooo much easier.
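One common answer — assuming FreeBSD or Linux rather than macOS, where the options are slimmer — is to give each disk a GPT label matching its physical sticker and build the pool from the labels instead of the raw device nodes. A FreeBSD-flavoured sketch with hypothetical device and label names:

```shell
# Label each disk's ZFS partition after its sticker (FreeBSD gpart).
gpart create -s gpt ada0
gpart add -t freebsd-zfs -l ssd-red  ada0
gpart create -s gpt ada1
gpart add -t freebsd-zfs -l ssd-blue ada1

# The labels appear under /dev/gpt/ and survive device renumbering,
# so a failed disk is identified by its sticker, not its probe order.
zpool create tank mirror /dev/gpt/ssd-red /dev/gpt/ssd-blue
```

On Linux, building the pool from stable `/dev/disk/by-id/` paths achieves much the same thing without custom labels.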
Last I tried screwing around with #zfs, #KVM and #qemu on #slackware I had a bunch of fun with scripts from sbo and dependency hell – since neither is an official slack package.
How does this fare in 2024 if I were to try to get a headless host for my stuff (qubes-like but I like pain)?
This post is really a small collection of thoughts about Proxmox when used in a home lab situation and home labs in general. I was originally going to post this to Mastodon only but it didn't fit in a single post.
A lot of people (at least from what I see on Reddit) build Proxmox systems with shared file systems like Ceph, even for home lab use.
As my old NUC was showing its age, and didn't suffice for my #homelab needs anymore, I decided to build a new one. And because I prefer running all my #selfhosted services in containers, but abhor fucking around with #docker and docker-compose, it's a single-node #k8s cluster, using #k3s, just like my old server. One big difference is that the new server has a decent amount of drives for storage. I decided to set up #zfs to manage that, and zfs is all it's cracked up to be.
ZFS on root is back in the Ubuntu installer but there’s a better way to do it, next-generation hard drives are proving to be reliable but prices are going up thanks to storage-hungry AI, why getting started with ZFS is really easy, and the best filesystem for a single SSD (take a guess).
Today I pondered something: Proxmox and others boast native ZFS integration as one of their strengths. Many Proxmox features rely on ZFS's unique capabilities, and many setups are built around them. If Oracle were to send a cease and desist tomorrow, how would the situation unfold?
I just asked Llama3 Instruct (7B Q8): "Tell me all you know about ZFS on OpenBSD"
Llama3 replied: "A great topic!
ZFS is a combined file system and logical volume manager designed by Sun Microsystems, now owned by Oracle. It's known for its advanced features, high performance, and reliability. On OpenBSD, ZFS is available as part of the base system since version 5.4 (released in 2011). Here are some key aspects of using ZFS on OpenBSD:
Installation
To install ZFS on OpenBSD, you'll need to specify zfs as your file system during installation. This will create a ZFS pool with a single vdev (virtual device) containing the root file system.
ZFS Pool
A ZFS pool is the top-level container for all ZFS storage. It can contain one or more virtual devices (vdevs), which are used to store data and metadata. On OpenBSD, you'll typically create a single pool with a single vdev containing the root file system.
VDEVs
Vdevs are the building blocks of a ZFS pool. They're composed of one or more disk devices, and can be configured in various ways (e.g., RAID-Z, mirror, single-disk). OpenBSD supports the following vdev types:
raidz: A type of RAID-5, providing redundancy with data striping.
mirror: A simple mirroring configuration for each block.
single: A single disk vdev, used for a standalone pool or for testing.
ZFS Features
Some notable ZFS features available on OpenBSD:
Snapshots: Instantaneous copies of your file system at a specific point in time. You can take snapshots, clone them, and even revert to previous states if needed.
Send/Receive: Allows you to send and receive entire ZFS pools or datasets between systems.
Checksums: Data integrity checks are performed on all data written to disk.
Compressed: Compression is supported for both data and metadata, reducing storage requirements.
Quotas: Limits can be set for individual datasets to prevent excessive usage.
OpenBSD-specific ZFS settings
Some OpenBSD-specific configuration options:
zfs_load: Controls whether the ZFS module is loaded at boot time. Set it to YES in /etc/sysctl.conf to enable ZFS support.
zfs_root: Specifies the root file system dataset name. Typically set to tank/root.
zfs_pool: Sets the default pool name (e.g., tank).
ZFS CLI
The OpenBSD ZFS command-line interface (CLI) is similar to other Unix-like systems:
# zpool list # List all pools and their status
# zfs list # List all datasets in a pool
# zfs snapshot # Create a new snapshot of a dataset
# zfs clone # Clone a snapshot or dataset
# zfs send # Send a dataset or snapshot to another system
Tips and Considerations
Be cautious when using ZFS, as some operations (e.g., zpool destroy) can be destructive.
Regularly check your pool's health with zpool status and ensure you have enough disk space.
Use zfs snapshots to create regular backups of your file systems.
This is just a brief overview of using ZFS on OpenBSD. If you have specific questions or concerns, feel free to ask!"
We're safe. And I should ask Theo why he's always lied and never announced ZFS support on OpenBSD. 😆
#ZFS was listed as experimental for #NetBSD 9. Per the wiki <https://wiki.netbsd.org/zfs/>, I would rather use FreeBSD, which supports root on ZFS without the detour through FFS;
also for the availability and support of Rust software, as the Python ecosystem seems to be using more of that as time goes by.
I personally need to check the situation in #FreeBSD with Intel CPUs that have only E-cores.