Should I move to Docker?

I'm a retired Unix admin. It was my job from the early '90s until the mid '10s. I've kept somewhat current ever since by running various machines at home. So far I've managed to avoid using Docker at home, even though I have a decent understanding of how it works - although I stopped being a sysadmin in the mid '10s, I still worked for a technology company and did plenty of "interesting" reading and training.

It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.

I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?

Boomkop3,

It's basically a VM without the drawbacks of a VM, so why would you not? It's hecking awesome

excitingburp,

Don't learn Docker, learn containers. Docker is merely one of the first runtimes, and a rather shit one at that (it's a bunch of half-baked projects, container signing being one major example).

Learn Kubernetes; k3s is probably a good place to start. Docker-compose is simply a proprietary and poorly designed version of it. If you know Kubernetes, you'll quickly be able to pick up docker-compose if you ever need to.

You can use buildah bud (part of the Podman ecosystem) to build Containerfiles (exactly the same thing as Dockerfiles, minus the trademark). Buildah can also be used without a Containerfile (your Containerfile simply becomes a script in the language of your choice, e.g. bash), which is far more versatile. Speaking of Podman: if you want to keep things really simple, you can manually create a bunch of containers in a pod and then ask Podman to create a set of systemd units for you. Podman supports nearly all of what Docker does (with the exception of Docker's borked signing) and has identical command-line syntax. Podman can also host a Docker-compatible socket if you need to use it with something that really wants Docker.
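
For the pod-plus-systemd flow, a minimal sketch looks like this (pod, container names and images are invented for illustration; `podman generate systemd` is the relevant command):

```bash
# Create a pod and drop a couple of containers into it
# (names and images are arbitrary examples).
podman pod create --name mypod -p 8080:80
podman run -d --pod mypod --name web docker.io/library/nginx:alpine
podman run -d --pod mypod --name cache docker.io/library/redis:alpine

# Write *.service unit files for the pod and its containers into the
# current directory; --new makes each unit create a fresh container
# on start instead of reusing an existing one.
podman generate systemd --new --files --name mypod
```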

I'm personally a big fan of Podman, but I'm also a fan of anything that isn't Docker: LXD is another popular runtime, and containerd is (IIRC) the runtime underpinning Docker. There's also Firecracker or KubeVirt, which go full circle and let you manage tiny VMs like containers.

GreatBlueHeron,

All that makes sense - except that I'm talking about 1 or 2 physical servers at home, and my only real motivation for looking into containers at all is that some software I've wanted to install recently has shipped as docker-compose scripts. If I'm going to ignore their packaging anyway, and massage them into some other container management system, I would be happier just running them on bare metal like I've done with everything else forever.

WindowsEnjoyer,

If you have a homelab and aren't using containers, you are missing out A LOT! Docker-compose is a beautiful thing for a homelab. <3

pete_the_cat,

It took me a while to convert to Docker, I was used to installing packages for all my Usenet and media apps, along with my webserver. I tried Docker here and there but always had a random issue pop up where one container would lose contact with the other, even though they were in the same subnet. Since most containers only contain the bare minimum, it was hard to troubleshoot connectivity issues. Frustrated, I would just go back to native apps.

About a year or so ago, I finally sat down and messed around with it a lot, and then wrote a compose file for everything. I've been gradually expanding upon it, and it's awesome to have my full stack set up, with like 20 containers and their configs along with an SSL-secured reverse proxy, in like 5-10 minutes! I have since broken the compose file out into multiple smaller files and wrote a shell script to set up the necessary stuff and then loop through all the compose files (see the sketch below), so now all it takes is the execution of one command instead of a few hours of manual configuration!
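
Something like this hypothetical wrapper captures the idea (the directory layout and network name are invented for illustration):

```bash
#!/usr/bin/env bash
# Hypothetical wrapper: shared prep first, then bring up each stack.
set -euo pipefail

# Shared setup, e.g. a network the reverse proxy and apps all join
# ("proxy" is an invented name; ignore the error if it already exists).
docker network create proxy 2>/dev/null || true

# Loop over the per-service compose files and start them detached.
for f in /opt/stacks/*/docker-compose.yml; do
    echo "Starting stack: $f"
    docker compose -f "$f" up -d
done
```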

Agent_Engelbert,

There are teachings I have read/discovered through YouTube (I can't remember exactly where) about the reasons and the philosophy behind moving to Docker, or using it as a sort of state machine.

Have you also considered looking into Docker's alternatives?

Here is one of the sources that may give you insights:

www.cloudzero.com/blog/docker-alternatives/

– There have been some concerns over Docker's licensing and, as such, some people have started preferring solutions such as Podman and containerd.

Both are good in terms of compatibility and usability; however, I have not used them extensively.

Nonetheless, I am currently using Docker for my own hypervisor [Edit2: oops, I meant hypervisor, not hyperserver] purposes. And I am also a little concerned about the future of Docker, and would consider changing sometime in the future.

[Edit1: I am using Docker because it is easy to make custom machines, with all their file configurations, and deploy them that way. It is a time saver. But performance-wise, I would not recommend it for major machines that run major processes and services. And that's just the gist of it.]

Swarfega,

I'm a VMware and Windows admin in my work life. I don't have extensive knowledge of Linux, but I have been running Raspberry Pis at home. I can't remember why, but I started to migrate away from installed applications to Docker. It simplifies the process should I need to reload the OS or even migrate to a new Pi. I use a single docker-compose file that I just need to copy to the new Pi and then run to get my apps back up and running.

linuxserver.io make some good images and have example configs for docker-compose

If you want to have a play, just install something basic, like Pi-hole.
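
A first Pi-hole container boils down to something like this (ports and environment variables per the pihole/pihole image docs; they change between image versions, so double-check the current README):

```bash
# Minimal Pi-hole sketch: DNS on port 53, web UI mapped to host port 8080.
# WEBPASSWORD and the volume path follow the image docs at the time of
# writing - verify against the current pihole/pihole README before use.
docker run -d \
  --name pihole \
  -p 53:53/tcp -p 53:53/udp \
  -p 8080:80/tcp \
  -e TZ="Europe/London" \
  -e WEBPASSWORD="changeme" \
  -v "$(pwd)/etc-pihole:/etc/pihole" \
  --restart unless-stopped \
  pihole/pihole:latest
```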

buedi,

I would absolutely look into it. Many years ago, when Docker emerged, I did not understand it and called it "Hipster shit". But a lot of people around me who used Docker at that time did not understand it either. Some lost data, some had services that stopped working, and they had no idea how to fix it.

Years passed and containers stayed, so I started to have a closer look at it and tried to understand it: what you can do with it and what you can not. As others here said, I also had to learn how to troubleshoot, because stuff now runs inside a container and you don't just copy a new binary or library into a container to try to fix something.

Today, my homelab runs 50 containers and I am not looking back. When I rebuilt my homelab this year, I went full Docker. The most important reason for me: every application I run dockerized is predictable and isolated from the others (from the binary side; the network side is another story). The issue I had earlier with my homelab, running everything directly on the box in Linux, was having problems when, let's say, one application needs PHP 8.x and another, older one still only runs with PHP 7.x. Or multiple applications depend on a specific library, and after updating it one app works but the other doesn't anymore, because it would need an update too. Running an apt upgrade was always a very exciting moment… and not in a good way. With Docker I do not have these problems. I can update each container on its own. If something breaks in one container, it does not affect the others.
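
That PHP conflict is exactly where containers shine. A sketch (app names and ports invented; the image tags are standard Docker Hub PHP tags):

```bash
# Two applications needing different PHP majors, running side by side
# with no shared dependencies (container names are made up).
docker run -d --name legacy-app -p 8081:80 php:7.4-apache
docker run -d --name modern-app -p 8082:80 php:8.3-apache
```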

Another big plus is the backups you can do. I back up every docker-compose file + data for each container with Kopia. Since barely anything is installed in Linux directly, I can spin up a VM, restore my backups with Kopia and start all containers again to test my backup strategy. Stuff just works. No fiddling with the Linux system itself, adjusting tons of config files and installing hundreds of packages to get all my services up and running again when I have a hardware failure.

I really started to love Docker, especially in my Homelab.

Oh, and you would think everything being containerized means big resource usage? My 50 containers right now consume less than 6 GB of RAM, and I run stuff like Jellyfin, Pi-hole, Home Assistant, Mosquitto, multiple Kopia instances, multiple Traefik instances with CrowdSec, Logitech Media Server, Tandoor, Zabbix and a lot of other things.

shasta,

The backup and easy set up on other servers is not necessarily super useful for a homelab but a huge selling point for the enterprise level. You can make a VM template of your host with docker set up in it, with your Compose definitions but no actual data. Then spin up as many of those as you want and they’ll just download what they need to run the images. Copying VMs with all the images in them takes much longer.

And regarding the memory footprint, you can get that even lower using Podman, because it's daemonless. But it is a little more work to set things up to auto-start, because you have to manually put it into systemd. Still a great option, and it also works on Windows and is able to parse Compose configs too. Just running Docker Desktop on Windows takes up like 1.5 GB of memory for me. But I still prefer it because it has some convenient features.

MaximilianKohler,

It seems like Docker would be heavy on resources, since it installs & runs everything (MySQL, nginx, etc.) numerous times (once for each container) instead of once globally. Is that wrong?

buedi,

You would think so, yes. But to my surprise, my well over 60 containers so far consume less than 7 GB of RAM, according to htop. Also, containers can of course network and share services. For external access, for example, I run only one instance of Traefik. Or one coturn for Nextcloud and Synapse.

outcide,

Another old school sysadmin that “retired” in the early 2010s.

Yes, use docker-compose. It’s utterly worth it.

I was intensely irritated at first that all of my old troubleshooting tools were harder to use, and I just generally didn't trust it for ages, but after 5 years I wouldn't be without it.

DasGurke,

I'm a little younger but in the same boat. There is some friction having filesystems, ports and processes "hidden" from the host programs you typically rely on. But I've needed those sooooo much less now that all my services are in Docker with exactly matching dependencies, instead of rolling my eyes about running two PostgreSQL servers in different versions or juggling Python/Node/Ruby versions with asdf.

outcide,

Yeah, so worth it! The first time I moved a service to a new box and realised all I had to do was copy the compose file and docker-compose up -d … I was sold.

Now I’m moving everything to Docker Swarm which is a new adventure. :-)

Smk,

I would never go back installing something without docker. Never.

pete_the_cat,

For a lot of smaller things I feel that Docker is overkill, or simply not feasible (package management, utilities like screenfetch, text editors, etc.), but for larger apps it definitely makes things easier once you wrap your head around containerization.

For example, I switched full-time to Jellyfin from Plex and was attempting to use caddy-docker-proxy to forward the host network that Jellyfin uses to the Caddy server, but I couldn't get it to work automatically (explicitly defining the reverse proxy in the Caddyfile works without issue). I thought it would be easier to just install it natively, but since I hadn't installed it that way in a few years, I'd forgotten that it pulls in like 30-40 dependencies, since it's written in .NET (or would that be C#?), and takes a good few minutes to install. I said screw that, removed all the deps, went back to using the container, and just stuck with the normal version of Caddy, which works fine.

Gooey0210,

Nixos, nixos, nixos 🤌

milicent_bystandr,

Does Docker still give a security benefit over NixOS, because of the sandboxing?

dan,

Not familiar with NixOS, but there are probably still isolation benefits to Docker. If you care a lot about security, make sure Docker is running in rootless mode.

fruitycoder,

Both! Sandboxing from containers and configuration control from Nix go well together!

Gooey0210,

You can use the sandboxing of NixOS.

You get better performance, NixOS-level reproducibility, and it's not Docker, which is not FOSS and runs as root.

purelynonfunctional,

The Nix daemon itself still uses root at build/install time for now. NixOS doesn’t have any built-in sandboxing for running applications à la Docker, though it does have AppArmor support. But then, NixOS doesn’t generally have applications run as root (containerized or otherwise), unlike Docker.

Gooey0210,

You don't need to build/install with root; you can use home-manager.

And for isolation there's one good module, I forgot its name.

And if you want something easier but less reproducible, you can do containers, but with NixOS's Podman, which is of course built in.

fruitycoder,

I'm honestly not sure if we are agreeing or disagreeing lol

Nix for building OCI containers is great, and NixOS seems like a great base system too. It seems like a natural step to take that and use it to define a k8s system in the future as well.

I'm currently doing that with OpenTofu (Terraform's open-source successor) and Ansible, but I feel like replacing those with Nix may provide a real completeness to the codification of the OS.

Barring k8s though, at least until it gets so simple you might as well use it, Podman is so far the go-to way to run containers instead of Docker (for both of the reasons you mentioned!). That, and Flatpaks for GUI apps, because of the portals system!

roofuskit,

Dear God yes. It makes life a lot easier. It’s very easy to use and understand.

I have a feeling that once you’ve used it you’ll be kicking yourself for not doing so sooner.

SpaceCadet,

I think it’s a good tool to have on your toolbelt, so it can’t hurt to look into it.

Whether you will like it or not, and whether you should move your existing stuff to it is another matter. I know us old Unix folk can be a fussy bunch about new fads (I started as a Unix admin in the late 90s myself).

Personally, I find docker a useful tool for a lot of things, but I also know when to leave the tool in the box.

aniki,

Welcome to Docker! It’s fucking awesome! One thing to remember is:

PLEASE set up rootless Docker. It's safer, easier [no sudo all the fking time], and there are literally zero downsides other than the initial configuration – but the Docker docs are amazing and they cover every distro. It's only a few more steps, and you can sleep secure in the fact that even in the WORST case scenario [compromised to root], an attacker still only has a container running in userspace.
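
For reference, the rootless setup boils down to a handful of commands. This sketch assumes a Debian/Ubuntu host; other distros differ slightly, so follow the rootless section of the official Docker docs:

```bash
# Prerequisites for user namespaces (Debian/Ubuntu package names).
sudo apt-get install -y uidmap dbus-user-session

# Run as the unprivileged user that will own the daemon.
dockerd-rootless-setuptool.sh install

# Start the per-user daemon now and on every login.
systemctl --user enable --now docker

# Optionally keep it running while the user is logged out.
sudo loginctl enable-linger "$USER"

# Point the client at the rootless socket.
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
```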

elscallr,

Yes. Containers are awesome in that they let you use an application inside a sandbox, but beyond that you can deploy it anywhere.

If you’re in the sysadmin world you should not only embrace Docker but I’d recommend learning k8s, too, if you still enjoy those things.

KyuubiNoKitsune,

K8s is awesome.

Presi300,

Yeh, I'm not a system admin in any meaning of the word, but Docker is so simple that even I got around to figuring it out; to me it just exists to save time and prevent headaches (dependency hell).

onlinepersona,

Why wouldn’t you want to use containers? I’m curious. What do you use now? Ansible? Puppet? Chef?

Dyskolos,

Not OP, but seriously asking: why should I? I usually still use VMs for every app I need. Much more work, I assume, but besides saving time (and some overhead, and maybe performance), what would I gain from Docker or other containers?

SpaceCadet,

what would I gain from docker or other containers?

Reproducibility.

Once you’ve built the Dockerfile or compose file for your container, it’s trivial to spin it up on another machine later. It’s no longer bound to the specific VM and OS configuration you’ve built your service on top of and you can easily migrate containers or move them around.

Dyskolos,

But that's possible with a VM too. Or am I missing something here?

uzay,

Apart from the dependency stuff, what you need to migrate when you use docker-compose is just a text file and the volumes that hold the data. No full VMs that contain entire systems because all that stuff is just recreated automatically in seconds on the new machine.

Dyskolos,

Ok, that does save a lot of overhead and space. Does it impact performance compared to a VM?

felbane,

If anything, containers are less resource intensive than VMs.

Dyskolos,

Thank you. Guess I really need to take some time to get into it. Just never saw a real reason.

felbane,

The great thing about containers is that you don’t have to understand the full scope of how they work in order to use them.

You can start with learning how to use docker-compose to get a set of applications running, and once you understand that (which is relatively easy) then go a layer deeper and learn how to customize a container, then how to build your own container from the ground up and/or containerize an application that doesn’t ship its own images.

But you don’t need to understand that stuff to make full use of them, just like you don’t need to understand how your distribution builds an rpm or deb package. You can stop whenever your curiosity runs out.

Dyskolos,

Won't need to containerize my own stuff. Yet. But many apps just offer a recent Docker image or some outdated manual install instructions. Hence why I get more and more annoyed/intrigued by Docker 😁

Thanks for the guide!

twei,

If you update your OS, it can happen that a changed dependency breaks your app. This wouldn't happen with Docker, as every dependency is shipped with the application in the container.

Dyskolos,

Ah okay. So it's like an escape from dependency hell… Thanks.

criticalimpact,

Saves time, minimal compatibility issues, portability, and you can update with 2 commands. There's really no reason not to use Docker.

Dyskolos,

But I can't really tinker IN the Docker image, right? It's maintained elsewhere and I just get what I get, but with way less tinkering? Do I have control over the amount/percentage of resources a container uses? And could I just freeze a container, move it to another physical server and continue it there? So would it be worth the time to learn everything about Docker for "just" my 10 VMs to replace in the long run?

xcjs,

You can tinker in the image in a variety of ways, but make sure to preserve your state outside the container in some way:

  1. Extend the image you want to use with a custom Dockerfile
  2. Execute an interactive shell session, for example docker exec -it containerName /bin/bash
  3. Replace or expose filesystem resources using host or volume mounts.

Yes, you can set a variety of resource constraints, including but not limited to processor and memory utilization.
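
For example (values arbitrary; these are standard docker run flags):

```bash
# Cap an example container at 1.5 CPUs and 512 MiB of memory.
docker run -d --name limited \
  --cpus="1.5" \
  --memory="512m" \
  nginx:alpine
```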

There's no reason to "freeze" a container, but if your state is in a host or volume mount, you can destroy the container, migrate your data, and resume it with a run command or docker-compose file. Different terminology and concept, but the same result.
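
Concretely, that migrate-and-resume flow could look like this sketch (the paths and the host name are invented):

```bash
# On the old host: stop the stack, then archive the compose file and
# the bind-mounted state directory (./data is an invented example).
docker compose down
tar czf myapp-state.tar.gz docker-compose.yml data/

# Copy to the new machine ("newbox" is a made-up host name).
scp myapp-state.tar.gz newbox:~/myapp/

# On the new host: unpack and resume with the same images and state.
tar xzf myapp-state.tar.gz
docker compose up -d
```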

It may be worth it if you want to free up overhead used by virtual machines on your host, store your state more centrally, and/or represent your infrastructure as a docker-compose file or set of docker-compose files.

Dyskolos,

Hm. That doesn't really sound bad. Thanks man, I guess I will take some time to read into it. Currently on Proxmox, but AFAIK it does containers too.

xcjs,

It’s really not! I migrated rapidly from orchestrating services with Vagrant and virtual machines to Docker just because of how much more efficient it is.

Granted, it’s a different tool to learn and takes time, but I feel like the tradeoff was well worth it in my case.

I also further orchestrate my containers using Ansible, but that’s not entirely necessary for everyone.

Dyskolos,

I only use like 10 VMs; guess there's no need for overkill with additional stuff. Though I'd like a GUI - there probably is one for Docker? I once tested a complete OS built around Docker (forgot the name), but it seemed very unfriendly and overly convoluted.

xcjs,

There’s a container web UI called Portainer, but I’ve never used it. It may be what you’re looking for.

I also use a container called Watchtower to automatically update my services. Granted, there's some risk there, but I wrote a script for backup snapshots in case I need to revert, and Docker makes that easy with image tags.

There's another container called Autoheal that will restart containers with failed healthchecks. (Not every container has a built-in healthcheck, but they're easy to add with a custom Dockerfile or a docker-compose file.)
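
As a sketch, a healthcheck can even be attached at run time without a custom Dockerfile (the app and endpoint are invented; the flags are standard docker run options, and curl has to exist inside the image):

```bash
# Mark the container unhealthy if the (hypothetical) endpoint stops
# answering; Autoheal watches for exactly this status.
docker run -d --name myapp \
  --health-cmd="curl -fsS http://localhost:8080/health || exit 1" \
  --health-interval=30s \
  --health-timeout=5s \
  --health-retries=3 \
  myapp:latest
```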

Dyskolos,

Thanks for the tips! But did I get that right: a container can have access to other containers?

xcjs,

The Docker client communicates over a UNIX socket. If you mount that socket into a container that has a Docker client, it can communicate with the host's Docker instance.
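
In practice that mount looks like this; Watchtower, mentioned above, is the classic example:

```bash
# Give Watchtower access to the host's Docker daemon by mounting the
# UNIX socket into the container.
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
```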

It’s entirely optional.

Dyskolos,

Ah okay. Sounds safe enough. Thanks again :-)

xcjs,

Of course!

MostlyGibberish,

One of the things I like about containers is how central the IaC methodology is. There are certainly tools to codify VMs, but with Docker, right out of the gate, you’ll be defining your containers through a Dockerfile, or docker-compose.yml, or whatever other orchestration platform. With a VM, I’m always tempted to just make on the fly config changes directly on the box, since it’s so heavy to rebuild them, but with containers, I’m more driven to properly update the container definition and then rebuild the container. Because of that, you have an inherent backup that you can easily push to a remote git server or something similar. Maybe that’s not as much of a benefit if you have a good system already, but containers make it easier imo.

Dyskolos,

Actually, I've only tried a Docker container once, tbh. Haven't put much time into it and was kinda forced to. So, if I got you right, I define the container with like NIC setup or IP or RAM/CPU usage and that's it? And the configuration of the app in the container: is that IN the container, or applied "onto it" for easy rebuild purposes? Right now I just have a ton of (big) backups of all VMs. If I screw up, I'm going back to this morning. Takes like 2 minutes tops. Would I even see a benefit of Docker, besides saving much overhead of course?

felbane,

You don't actually have to care about defining IPs, CPU/RAM reservations, etc. Your docker-compose file just defines the applications you want and a port mapping or two, and that's it.

Example:


```yaml
---
version: "2.1"
services:
  adguardhome-sync:
    image: lscr.io/linuxserver/adguardhome-sync:latest
    container_name: adguardhome-sync
    environment:
      - CONFIGFILE=/config/adguardhome-sync.yaml
    volumes:
      - /path/to/my/configs/adguardhome-sync:/config
    ports:
      - 8080:8080
    restart: unless-stopped
```

That's it. You run docker-compose up, the container starts, reads your config from your config folder, and exposes port 8080 to the rest of your network.

Dyskolos,

Oh… But that means I need another server with a reverse proxy to actually reach it by domain/IP? Luckily Caddy already runs fine 😊

Thanks man!

felbane,

Most people set up a reverse proxy, yes, but it’s not strictly necessary. You could certainly change the port mapping to 8080:443 and expose the application port directly that way, but then you’d obviously have to jump through some extra hoops for certificates, etc.

Caddy is a great solution (and there’s even a container image for it 😉)

Dyskolos,

Lol… nah, I somehow prefer at least Caddy non-containerized. Many domains and ports - I think that would not work great in a container with the certificates (which I also need to manually copy regularly to some apps). But what do I know 😁

pete_the_cat,

VMs have a ton of overhead compared to Docker. VMs replicate everything in the computer while Docker just uses the host for everything, except it sandboxes the apps.

In theory, VMs are far more secure, since they're almost entirely isolated from the host system (assuming you don't have any of the host's filesystems attached); they are also OS-agnostic, whereas Docker is limited to the OS it runs on.

Dyskolos,

Ah ok thanks, the security-aspect is indeed important to me. So I shouldn’t really use it for critical things. Especially those with external access.

pete_the_cat,

Docker is still secure; it's just less secure than virtualization. It's like a standard doorknob lock (the twist/push-button kind) vs a deadbolt. Both will keep 90% of bad actors out, but those who really want to get in can, depending on how strong the security is.

GreatBlueHeron,

Currently no virtualisation at all - just my OS on bare metal with some apps installed. Remember, this is a single machine sitting in my basement running Samba and a couple of other things - there’s not much to orchestrate :-)

onlinepersona,

Oh, I thought you had multiple machines.

I use docker because each service I use requires different libraries with different versions. With containers, that doesn’t matter. It also provides some rudimentary security. If an attacker gets in, they’ll have to break out of the container first to get at the rest of the system. Each container can run with a different user, so even if they do get out of the container, at worst they’ll be able to destroy the data they have access to - well, they’ll still see other stuff in the network, but I think it’s better than being straight pwned.
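
Running a container as a dedicated unprivileged user is a single flag (the UID/GID values are arbitrary and the image name is a placeholder):

```bash
# The process in the container runs as UID 1000 / GID 1000 instead of
# root, limiting the blast radius if an attacker breaks out.
docker run -d --name app \
  --user 1000:1000 \
  myimage:latest
```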

pete_the_cat,

It makes deployments a lot easier once you have the groundwork laid (writing your compose files). If you ever need to nuke the OS, reinstalling and configuring 20+ apps takes only a few minutes (assuming you still have the config data, which should live outside of the container).

For example, setting up my media server, webserver, SQL server, and Usenet suite of apps can take a few hours to do natively. Using Docker Compose, it takes one command and about 5-10 minutes. Granted, I had to spend a few hours writing the compose files and testing everything, along with storing the config data, but simply backing up the compose files with git means I can pull everything down quickly. Even if I don't have the config files anymore, it'd probably only take an hour or less to configure everything.
