Upgrading my TrueNAS Mini X+ from TrueNAS Scale 23.10.2 to 24.04.0 was smooth.
I had to change the location of the node_exporter program from /usr/bin to a new location, which is under a separate dataset I created for local apps (not to be confused with the TrueNAS Scale Applications dataset).
The /usr directory, which is mounted from boot-pool/ROOT/24.04.0/usr, is now mounted as read-only.
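A sketch of that workaround, with an assumed pool name (`tank`) and hypothetical paths; the actual dataset layout and binary location will differ:

```shell
# Create a dataset for locally-installed binaries, outside the read-only
# boot environment (pool name and mountpoint are placeholders)
zfs create -o mountpoint=/opt/local tank/local-apps
mkdir -p /opt/local/bin

# Copy node_exporter out of the now read-only /usr/bin
# (reading from a read-only filesystem is fine; writing there is not)
cp /usr/bin/node_exporter /opt/local/bin/
```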
@vwbusguy @dymaxoid Hmmm, need to think about a path forward. I'm not keen on a Debian host, nor on having Kubernetes on my low-power NAS. Perhaps something CoreOS-based with Podman and Cockpit. Well, it's been a good ride; thanks for FreeNAS and later iXsystems.

⚠️ TrueNAS CORE 13 is the end of the FreeBSD version | The Register
「 We have no plans for a FreeBSD 14-based TrueNAS at this time, and the 13.1 release will be a longer-lived maintenance train for those who want to continue running on the BSD product before migrating to SCALE at some later date 」
@bluGill I'd suggest checking all the products/solutions that could replace this one, and writing a pros/cons sheet for each.
Then you will be able to choose the right product.
After watching videos from @tomlawrence and @technotim about #TrueNAS I'm still somewhat undecided on how to set up my 12-disk server with a mix of old/new disks for general-purpose storage (backup, iSCSI targets for VMs, etc.), but it seems like I'll end up with this kind of setup:
sdm: system disk (SATA DOM)
8 or 12 disks as 2-disk mirrors in a pool with NVMe for SLOG and maybe 2 or 4 SSDs for L2ARC, depending on whether I can fit them inside the 2 RU server or if I need to use drive bays for that.
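As a sketch, that layout would look something like this; the pool name and device names are placeholders, and the mirror pattern extends to however many disk pairs fit:

```shell
# Pool of striped 2-disk mirrors (extend the pattern to 8 or 12 disks)
zpool create tank \
  mirror sda sdb \
  mirror sdc sdd \
  mirror sde sdf \
  mirror sdg sdh

# NVMe device as SLOG (separate intent log)
zpool add tank log nvme0n1

# SSDs as L2ARC cache devices (cache vdevs are always striped, never mirrored)
zpool add tank cache sdi sdj
```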
@ij @tomlawrence @technotim Lesser-known fact: on #ZFS, disabling sync writes with sync=disabled is more efficient than using a ZIL SLOG, if sync writes are not required; that is, if losing a couple of seconds' worth of writes is acceptable.
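The property is set per dataset; a minimal sketch, with an assumed pool/dataset name:

```shell
# Disable synchronous writes on a dataset where losing the last few
# seconds of writes on power loss is acceptable
zfs set sync=disabled tank/scratch

# Check the current setting
zfs get sync tank/scratch

# Revert to the default (honour applications' sync requests)
zfs set sync=standard tank/scratch
```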
@madalex Full disk encryption:
I do like full disk encryption, but I don't really see the point of it in my use case.
When I throw drives away, they will have seen a 13mm metal drill going through them, usually twice.
Good luck getting data off that drive.
Snapshots: I don't want to waste all of my space with snapshots. I'll just make a daily backup with N+2 days.
So I'm using #ZFS on #TrueNAS and today I noticed that "auto-trim" is turned off on my ZFS pool. "Hmm," I asked myself "what is TRIM on ZFS?"
After a few minutes of searching, I have no idea what TRIMming does. I know a hundred ways to do "it" manually or automatically. But I don't know what it DOES.
So I finally found this presentation from 2019 that pretty well lays out what it is and why it exists. My drives, however, are rotating magnetic drives (just like in Victorian times), so I'm not sure there's any value in TRIMming my ZFS. Thoughts?
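For reference, these are the knobs involved (pool name assumed); on rotating drives that don't advertise TRIM support, the requests are effectively no-ops, so there's likely nothing to gain:

```shell
# Enable automatic TRIM on the pool
zpool set autotrim=on tank

# Or run a one-off manual TRIM
zpool trim tank

# Check per-vdev TRIM status
zpool status -t tank
```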
Weekend project: upgraded our #trueNAS system with #ZFS and a RAIDz1 pool from 4 TB to 16 TB. Easy peasy! #FreeBSD with ZFS was one of the best choices for our NAS. Combined with #restic it is the best backup solution.
Does anyone have a relaxed script for ZFS backups (zfs snapshot | zfs send | zfs receive) at hand?
I've now gone ahead with my plan to move the workstation to NixOS on a Linux #zfs pool, and I'd like to regularly push incremental backups of some filesystems (home etc.) over to the #truenas FreeBSD zfs pool without much fiddling.
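A minimal sketch of such a script, assuming passwordless SSH to the NAS and hypothetical dataset names; tools like sanoid/syncoid or zrepl handle the edge cases far more robustly:

```shell
#!/bin/sh
# Minimal incremental zfs send | receive over SSH.
# SRC, DST, and HOST are placeholders for illustration.
SRC=rpool/home
DST=backup/home
HOST=truenas.local
NOW=$(date +%Y%m%d-%H%M)

# Latest existing snapshot on the source; this sketch assumes it also
# exists on the destination from the previous run (the first run would
# need a full, non-incremental send instead).
PREV=$(zfs list -H -t snapshot -o name -s creation "$SRC" | tail -1 | cut -d@ -f2)

# Take a new snapshot and ship only the increment since PREV
zfs snapshot "$SRC@$NOW"
zfs send -i "@$PREV" "$SRC@$NOW" | ssh "$HOST" zfs receive -F "$DST"
```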
Question for the #ZFS people here. I have an ancient #TrueNAS SCALE machine which I am renting from my hosting provider. It has a Xeon W3520 (4C/8T @ 2.93GHz), 8GB RAM, 4x3TB spinning rust, in a single pool with two mirrored vdevs.
I'm using this machine as iSCSI backend for my virtualization hosts. It's slow AF. I get extremely poor IOPS. I'm pretty sure it's the disks that are the bottleneck. ... (continued)
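One way to confirm whether the disks really are the bottleneck, assuming fio is available on the box (file path and job parameters are illustrative):

```shell
# Random 4K read IOPS test against a file on the pool; on 4x spinning
# disks in two mirrors, a few hundred IOPS would be a normal result
fio --name=randread --filename=/mnt/tank/fio.test --size=4G \
    --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --runtime=60 --time_based --group_reporting
```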
@jpm @voltagex Once did a product which shipped on Unix with a TCL shell for the user accounts (the client also got a 'maintenance' account with the usual Bash access). It worked better than it had any right to.
The major client sent back a series of enhancement requests (scripting, a GUI, an API) which that decision had pretty much forced on us, and which we turned around at low cost. That led to a lecture from the product manager about the true purpose of ERs being to leech more cash from the client.
@art4857 From experience... if the RAM is broken or badly configured, it will crash within the first 30 seconds (the maximum time needed to reach full temperature).
Sometimes having your own server and hosting stuff yourself can be a real struggle. A few days ago I noticed that all the apps on my TrueNAS server were down, and after investigating I found that they were all stuck deploying and the server was at 100% CPU load. Turns out that Kubernetes (k3s) was somehow stuck and eating up resources. I even had trouble stopping all my apps, and rebooting the system didn't solve the problem (that would have been too easy, I guess). Well, let's see if the ticket I created will be any help.