Lem453

@Lem453@lemmy.ca


Lem453, (edited )

Aurora dev edition is the Bazzite equivalent for devs. Containers are built right into the terminal (Ptyxis).

Lem453,

Nobara or Bazzite are the way to go.

State of S3 - Your Laptop is no Laptop anymore (blog.jeujeus.de)

In this article, I aim to take a different approach. We will begin by defining a laptop according to my understanding. Then I will share my personal history and journey to this point, as well as my current situation with my home and work laptops. Using this perspective, we will explore the current dysfunctionality of the standby...

Lem453, (edited )

I have an older XPS where the CPU still supports deep sleep (S3).

Most distros have it disabled by default now because neither AMD nor Intel seem to officially support it in new CPUs (so Windows will have the same problem).

To check if your CPU supports it, you can run: journalctl | grep S1

You should see a message that says something like “CPU supports S1 S2 S3” etc. If S3 is there, then deep sleep is supported and can be enabled.

Ubuntu instructions: askubuntu.com/questions/1029474/…/1036122#1036122

Fedora desktop or atomic instructions: discussion.fedoraproject.org/t/…/4
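
Another way to check and enable it, as a rough sketch using the generic kernel interfaces (the persistence step depends on your distro, see the links above):

    # show the suspend modes the kernel offers; the bracketed one is active
    cat /sys/power/mem_sleep
    # e.g. "s2idle [deep]" means S3 is available; just "[s2idle]" means it is not

    # try deep sleep for the current boot only
    echo deep | sudo tee /sys/power/mem_sleep

    # make it stick with a kernel argument
    # Ubuntu/GRUB: add mem_sleep_default=deep to GRUB_CMDLINE_LINUX_DEFAULT, then:
    sudo update-grub
    # Fedora Atomic:
    sudo rpm-ostree kargs --append=mem_sleep_default=deep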

Note, this is purely the fault of CPU manufacturers for being so shitty about proper sleep, and yet another point that has to be conceded to Apple. Imagine explaining to a normal person that your XPS is really good and way cheaper than a Mac…but the battery will die overnight when you need it in the morning. Literally just shooting themselves in the foot.

Hibernate works as well but takes a bit longer. Hibernate also crashes on many modern systems, but again works great on my older XPS. You have to manually activate this as well, and it’s really not too bad with a good SSD.

That being said, this should all be very basic functionality, so why do I have to do this manually? This shit is why people buy Macs.

There’s also room for distros to improve here. The installer can probe the CPU and see if S3 is supported; if so, it can use deep sleep automatically. Why do I have to mess with kernel arguments?

Similar for hibernate, why doesn’t the installer just have a check box that sets up the hibernate file/partition?
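
For reference, the manual hibernate setup the installer could automate looks roughly like this (a sketch for a plain ext4 root; sizes and paths are examples, and btrfs needs extra steps):

    # create a swap file at least as large as RAM
    sudo dd if=/dev/zero of=/swapfile bs=1M count=16384 status=progress
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    echo '/swapfile none swap defaults 0 0' | sudo tee -a /etc/fstab

    # the kernel needs to know where to resume from:
    findmnt -no UUID -T /swapfile           # UUID of the filesystem holding the swap file
    sudo filefrag -v /swapfile | head -n 4  # first physical_offset is the resume_offset
    # add resume=UUID=<that uuid> resume_offset=<that offset> to the kernel command line,
    # then test with: systemctl hibernate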

Lem453,

See my post here

lemmy.ca/comment/9578675

Lem453,

This. S0 idle was pushed by Microsoft and Intel, and AMD followed. Now all new non-Apple CPUs are an embarrassment when it comes to sleep, an ability essentially any normal person would expect without thinking about it. So when they buy a brand new laptop and it ends up with a dead battery every morning, people immediately just buy a Mac and get a much better experience.

Just completely shooting themselves in the foot. Same story with shitty laptop screens for nearly 5 years while Macs had retina displays.

Lem453, (edited )

100% this. Sleep on Linux is perfect on my older XPS (after I manually enable it). Lots of reports of it not working on newer laptops.

While I agree it doesn’t have to be a walled garden, you do have to admit that Apple wouldn’t ship a laptop that couldn’t sleep properly. They are so much better at real-world design than other manufacturers, who were happy to abandon S3 in favour of making laptops into phones, as if anyone actually wanted that.

Lem453, (edited )

Self-hosted AI seems like an intriguing option for those capable of running it. Naturally this will always be more complex than paying someone else to host it for you, but it seems like that’s the only way if you care about privacy.

github.com/mudler/LocalAI
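
A minimal sketch of what running it looks like, assuming the localai/localai image and its default port 8080 (tag and defaults should be checked against the project README):

    docker run -d --name localai -p 8080:8080 localai/localai:latest-cpu

    # it speaks the OpenAI API, so a quick sanity check is:
    curl http://localhost:8080/v1/models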

Lem453,

I just changed my docker tag from :10.8.13 to :10.9.3 and had absolutely no issues.
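
(The image name below is just a placeholder; the point is that with a pinned tag the upgrade is one edit plus a pull.)

    # in the compose file:  image: example/app:10.8.13  ->  image: example/app:10.9.3
    docker compose pull
    docker compose up -d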

Lem453, (edited )

Use the multi container extension for Firefox and have all your Google stuff in one container, banks in another, social media in another etc.

addons.mozilla.org/…/multi-account-containers/

Lem453,

Do you have a link that talks about this? What is missing?

Lem453,

Throw away your current SSH client and get

xpipe.io

Lem453,

Do you want to have 2fa keys on all your devices? Doesn’t that defeat the purpose?

Lem453,

Me messing about with other docker applications. Seafile is one of the first things I set up on my server. I’ve been adding and playing around with dozens of different apps since then, many of which have numerous containers each. Usually I make the container without defined storage until I get the compose working, then I set the volumes to be on the ZFS array. When that happens, the old default docker volumes remain unused.

Need to remember to delete them periodically
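
The cleanup itself is quick; as a sketch:

    # list volumes no longer referenced by any container
    docker volume ls -f dangling=true
    # remove them (check the list first, this is destructive)
    docker volume prune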

Lem453,

It’s not so much a seafile issue as it is a feature request for proper checksum verification of the files that are copied. The conditions where it happened were an odd combination of having enough persistent storage, RAM and CPU but lacking in ephemeral space…I think. The issue is that seafile failed silently.

Lem453,

Ya, exactly this. I get optimizing for speed, but there should at least be an option afterwards to check file integrity. Feels like a crucial feature for a critical system.
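
In the meantime a manual spot check is doable, roughly like this (paths are examples, comparing the original folder against a second synced client or SeaDrive mount):

    # build checksum manifests of the source and the synced copy, then compare
    (cd /data/source && find . -type f -exec sha256sum {} +) | sort -k2 > source.sha256
    (cd /mnt/library && find . -type f -exec sha256sum {} +) | sort -k2 > copy.sha256
    diff source.sha256 copy.sha256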

Lem453,

Seafile has an sshfs-style client for Windows, Mac and Linux. Rather than a traditional folder sync like Dropbox (which Seafile also has), SeaDrive mounts a remote connection to your library that you can browse in your file explorer. I’ve only used the Windows version; it has little cloud icons that show the files are not local, and then you can right click a folder or file and “make available locally” to have offline access. This sounds exactly like what you are looking for. Full GUI access to all files with no local storage needed unless you want it.

I haven’t tried SeaDrive on Linux but they have the option on their site. I use the standard seafile-client on Linux and choose only certain libraries to use, with no issues. On Windows, SeaDrive is quite impressive in regard to how well it works.

help.seafile.com/…/drive_client_for_linux/

Lem453,

Nextcloud has never stored the files themselves in a DB. I’ve been using it since before it existed (ownCloud) and then switched; it has always had flat file storage that you can just back up and browse without the metadata from the database if you want.

Unfortunately that’s also part of its Achilles heel and why it’s so slow; it’s not optimized.
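
Concretely, on disk it’s just a plain tree per user (path is an example; yours depends on where the data directory is configured):

    ls /var/www/nextcloud/data/alice/files/
    # Documents/  Photos/  Projects/  ...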

Are we going to see arch based immutable distros in the near future?

Hi there folks, I’m still learning about Linux and have yet to dip my toes properly in any arch based distro. Have for the moment fallen in love with the immutable distros based on the Universal Blue project. However I do want to learn about what arch has to offer too and plan on installing default arch when I have time. But have...

Lem453,

The biggest issue with immutable OSs is the lack of containerized apps. Most devs simply don’t distribute their apps as flatpaks etc. Install Fedora Atomic. First thing I want to do is install xpipe to manage my servers. Can’t be done in an unprivileged flatpak. Great, layer it on.

Let’s try seafile next to sync my files and projects…the flatpak is maintained by a random volunteer and the most up-to-date version is from a year ago. Great, layer that in as well.

Let’s install a command line tool: before, it was one line; now it’s a whole lot of googling only to discover that the best way is probably to just use a whole other package manager like brew.

The concept is great and it has lots of potential, but it will only work if devs start packaging their stuff in a format that works with the new paradigm (containers).
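
For reference, the current workarounds in sketch form on Fedora Atomic / Universal Blue images (package names are just examples):

    # layer an RPM onto the base image; applies after a reboot
    rpm-ostree install htop

    # GUI apps from Flathub instead of layering
    flatpak install flathub org.mozilla.firefox

    # CLI tools via brew, where the image ships it for exactly this case
    brew install ripgrep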

Lem453,

I don’t think xpipe would work, it needs too many permissions.

Something like seafile would work, better than overlaying it I guess, but it still isn’t part of a package manager with easy auto-updates etc. like it would be if the devs published a flatpak.

At the end of the day it’s a lot more work than the promise of opening Discover, searching for an app and hitting install.

Lem453, (edited )

Is it not in the immich_pgdata or immich-app_pgdata folder?

The volumes themselves should be stored at /var/lib/docker/volumes
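
A quick way to locate a named volume on disk, as a sketch (the volume name comes from your compose project):

    docker volume ls
    docker volume inspect immich_pgdata --format '{{ .Mountpoint }}'
    # typically prints /var/lib/docker/volumes/immich_pgdata/_data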

For future reference, doing operations like this without backing up first is insane.

Get borgmatic installed to take automatic backups and send them to a backup destination like another server or BorgBase.

Lem453, (edited )

Awesome, take this close call as a kind reminder from the universe to back up!

Borg will allow incremental backups from any number of local folders to any number of remote locations. Borgmatic is a wrapper for it that also includes automated incremental borg backups.

I have a second server that runs this container: nold360/borgserver

Which works as a borg repository.

I also buy storage on BorgBase, so every hour an incremental backup goes to both.

The other day I blew away a config folder by accident and restored it with no sweat in 2 mins.
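
Under the hood these are ordinary borg operations; a rough sketch of the commands borgmatic automates (repo URL, paths and archive name are examples):

    borg init --encryption=repokey-blake2 ssh://user@backup-host/./docker-repo
    borg create --stats ssh://user@backup-host/./docker-repo::{hostname}-{now} /tank/docker

    # restoring a blown-away config folder
    borg list ssh://user@backup-host/./docker-repo
    borg extract ssh://user@backup-host/./docker-repo::myserver-2024-05-01 tank/docker/app/config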

Lem453,

This is one of the reasons I never use docker volumes. I bind mount a local folder from the host or mount an NFS share from somewhere else. It has been much more reliable because the exact location of the storage is defined clearly in the compose file.

Borg backup is set to backup the parent folder of all the docker storage folders so when I add a new one the backup solution just picks it up automatically at the next hourly run.
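
The difference in one line each (image and paths are placeholders):

    # named volume: docker picks the location under /var/lib/docker/volumes
    docker run -d -v app_data:/data example/app
    # bind mount: the path is spelled out in the compose/run line and easy to back up
    docker run -d -v /tank/docker/app/data:/data example/app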

Secure portal between Internet and internal services

I thought I was going to use Authentik for this purpose but it just seems to redirect to an otherwise Internet accessible page. I’m looking for a way to remotely access my home network at a site like remote.mywebsite.com. I have Nginx proxy forwarding with SSL working appropriately, so I need an internal service that receives...

Lem453,

This is the way. This is the video I followed.

www.youtube.com/watch?v=liV3c9m_OX8

I use traefik as a reverse proxy. I have externally accessible domains, and then extra-secure internal-only domains that require a WireGuard connection first as an extra layer of security.

Authentik can be used as a forward auth proxy and doesn’t care if it’s an internal or external domain.

Apps that don’t have good login or user management just get the Authentik proxy for single sign-on (sonarr, radarr, etc).

Apps that have OAuth integration get that for single sign-on (seafile, immich, etc).

To make it work, the video talks about adding both the internal and external domains to the local DNS, so that if you access it from outside it works, and if you access it over WireGuard or inside the LAN it also works.
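
A quick way to see the split DNS working, as a sketch with example names and resolvers:

    # inside the LAN (or over WireGuard), the local DNS answers with the proxy’s LAN IP
    dig +short app.example.com @192.168.1.1
    # from a public resolver you get the WAN IP, or nothing for internal-only names
    dig +short app.example.com @1.1.1.1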

Lem453,

What is the app running on? Can a browser on that device find the URL?
