What's the deal with Docker?

I never understood how to use Docker or what makes it so special. I would really like to use it on my Raspberry Pi 3 Model B+ to ease the setup process of self-hosting different things.

I’m currently running these things without Docker:

  • Mumble server with a Discord bridge and a music bot
  • Maubot, a plugin-based Matrix bot
  • FTP server
  • Two Discord Music bots

All of these things are running as systemd services in the background. Should I change this? A lot of the things I’m hosting offer Docker images.

It would also be great if someone could give me a quick-start guide for Docker. Thanks in advance!

dataprolet,
@dataprolet@lemmy.dbzer0.com avatar

Recent video that explains Docker very well: www.youtube.com/watch?v=rIrNIzy6U_g

festus,

I started self-hosting a bit prior to when Docker took off, and getting multiple services running was much harder. Service A wants a certain version of PHP installed with certain plugins while Service B wants a different version. You’d follow a tutorial for installing Service C and desperately hope that it wouldn’t somehow break Service A or B. You installed Service D for a bit despite all the installation pain and now want to uninstall it - I hope you tracked exactly what config changes you made throughout the system so you can undo it.

Docker fixed all of this by making each service independent through containers which made self-hosting 10x easier. I’d also add that I love how easy it is to transfer my setup to a new server - I keep all of my container volumes in a specific directory and my docker-compose files in another and that’s all I need to backup / transfer. Without Docker you’d have to specifically handle each & every configuration file and database location, and if you later upgrade to a newer version of the OS or a different distro you’d have to handle possible conflicts between your versions and what the distro expects.
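
A minimal sketch of that layout (the paths and the nextcloud service are just examples, not festus’s actual setup):

```yaml
# /srv/docker/compose/nextcloud/docker-compose.yml
services:
  nextcloud:
    image: nextcloud:latest
    ports:
      - "8080:80"
    volumes:
      # all persistent state lives under one directory, so backing up
      # /srv/docker/volumes and /srv/docker/compose covers everything
      - /srv/docker/volumes/nextcloud:/var/www/html
```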

sxan,
@sxan@midwest.social avatar

Docker is one piece of software that uses Linux containers to encapsulate a program and that program’s dependencies, while limiting the program’s access to the underlying OS. It’s chroot, but for more of the system. It can make running software that has a lot of moving parts and dependencies easier, and it can also improve your security when running that software.

For how-tos, watch one of the 875,936 YouTube tutorials, or read one of the 3 million text tutorials. Or ask ChatGPT, if you really need hand-holding.

buzz,
@buzz@lemmy.world avatar

There are like 5 million tutorials

bin_bash,
@bin_bash@lemmy.world avatar

Avoid Docker as much as possible. It gives you a quick way to run apps, but in a way that you can’t see what is going to be installed.

It also uses double the resources.

And it has frequent vulnerabilities.

recapitated,

False.

zalgotext,

> in a way that you can’t see what is going to be installed.

You can look at the Dockerfile and see every single step that goes into building a particular image.

> It also uses double the resources.

That’s patently false.

> And it has frequent vulnerabilities

Dawg have you ever actually used docker?

excitingburp, (edited )

For your use case, consider it to be a packaging format (like AppImage, Flatpak, Deb, RPM, etc.) that includes all the dependencies (including services, not just libraries) for the app in question.

> Should I change this?

If it’s not broken don’t fix it.

Use Podman (my preference; the systemd approach is awesome), containerd, or Incus. Docker is a graveyard of half-finished pet projects that have no reason for existing. Podman has a Docker-compatible socket, so 100% of Docker tooling will work with it.
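
For example, on a systemd-based distro you can expose Podman’s Docker-compatible socket and point existing Docker tooling at it (a rough sketch; the exact socket path can vary):

```sh
# enable Podman's Docker-compatible API socket for your user
systemctl --user enable --now podman.socket

# point Docker clients (docker CLI, docker-compose, etc.) at it
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
docker ps   # now talks to Podman
```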

ComradeKhoumrag,
@ComradeKhoumrag@infosec.pub avatar

I can add that Podman was ignored at my day job in previous years because there were some reliability issues, either with GPU access or networking (I forget which). However, these issues have been resolved and we’re reimplementing it pretty much effortlessly.

MashedTech,

Yep, we’re reconsidering it at work as well. It’s grown pretty nicely.

MonkderZweite,

Vs. LXD?

turkishdelight,

Docker makes sense if you are deploying thousands of machines in the cloud. I don’t think it makes as much sense if you have your own hardware.

Some services do have 1-line installers with docker, so those might be useful. But they usually have 1-line non-docker installers too.

peter,
@peter@feddit.uk avatar

Docker still makes sense on your own hardware. Especially if you’re the type of person to try out different programs often

matcha_addict, (edited )

This blog post explains it well:

cosmicbyt.es/…/demistifying-containers-part-1/

Essentially, containers are means of creating environments in which you can run software, and those environments are:

  • isolated, which makes it a very controlled environment. Much harder to run into errors
  • reproducible: we have tools that reproduce the same container from an image file
  • easy to distribute: just have the container image.
  • little to no compromises on performance (at least on Linux)

It is essentially a way for you to run a program without having to worry how to set up the environment, why it didn’t work as expected, what dependencies you’re missing, etc.
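
As a tiny illustration (using the public nginx image purely as an example), running a service without installing anything else on the host looks like:

```sh
# pull the image (which bundles all of its dependencies) and start it,
# publishing container port 80 on host port 8080
docker run -d --name web -p 8080:80 nginx:latest
```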

BCsven,

Install Portainer, it helps you get used to managing docker images and containers before going full command line.
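
For reference, the Portainer CE install is roughly the following two commands (check Portainer’s own docs for the current invocation and ports):

```sh
docker volume create portainer_data
docker run -d --name portainer --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```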

RootBeerGuy,
@RootBeerGuy@discuss.tchncs.de avatar

I actually prefer Dockge. I only have a few containers, and it’s a lot simpler while still able to do all the basics of Docker management. Portainer was overkill for me.

ikidd,
@ikidd@lemmy.world avatar

I have a pile of containers both for selfhosting and for dev builds, and still wouldn’t use Portainer.

Lazydocker, FTW

Vendetta9076,
@Vendetta9076@sh.itjust.works avatar

I learned on portainer. I just wish it worked better. Dockge is a much better solution anyways

ryannathans,

A lot of people here really do be describing docker like flatpak

matcha_addict,

They’re similar under the hood, but Flatpak is optimized for desktop use. Docker targets server applications.

BellyPurpledGerbil,

It’s virtual machines but faster, more configurable, with a considerably larger set of automation, and it consumes fewer computer resources than a traditional VM. Additionally, in software development it helps solve a problem summarized as “works on my machine.” A lot of traditional server creation and management relied on systems that needed to be set up perfectly identically on every deployment to prevent dumb defects based on whose machine was used to write it on. With Docker, it’s stupid easy to copy the automated configuration from “my machine” to “your machine.” Now everyone, including the production systems, is running from “my machine.” That’s kind of a big deal, even if it could be done in other ways natively on Linux operating systems; those don’t have the ease of use or the same shareability.

What you’re doing is perfectly expected. That’s a great way of getting around using Docker. You aren’t forced into using it. It’s just easier for most people

modeler,

This is exactly the answer.

I’d just expand on one thing: many systems have multiple apps that need to run at the same time. Each app has its own dependencies, sometimes requiring a specific version of a library.

In this situation, it’s very easy for one app to need v1 of MyCleverLibrary (and fails with v2) and another needs v2 (and fails with v1). And then at the next OS update, the distro updates to v2.5 and breaks everything.

In this situation, before containers, you would be stuck, or have to use some difficult workarounds, including different LD_LIBRARY_PATH settings that then break at the next update.

Using containers, each app has its own libraries at the correct and tested versions. These subtle interdependencies are eliminated and packages ‘just work’.
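
A hedged sketch of what that looks like with containers: each app pins the image (and therefore library) versions it was tested against, so the two can’t step on each other (the image names and tags are made up):

```yaml
# docker-compose.yml
services:
  app-a:
    image: example/app-a:1.4   # bundles MyCleverLibrary v1
  app-b:
    image: example/app-b:2.7   # bundles MyCleverLibrary v2
```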

TDCN,
@TDCN@feddit.dk avatar

I can also add that if you want to run multiple programs that each have a web interface, it’s easy to direct each interface to the port you want, instead of having to go through various config files that are different for each program or, worst case, having to change a hardcoded port in some software. With Docker you have the same easy config options for each service you want to run. Same with storage paths: various software stores its files in seemingly random places, but with Docker you just map a folder and all your files are stored there without any further config.
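
For example (a sketch with made-up service names), remapping ports and storage paths is the same couple of lines for every service, regardless of what the software itself expects:

```yaml
services:
  service-one:
    image: example/service-one
    ports:
      - "8081:80"                # host port you chose -> port the app listens on
    volumes:
      - ./data/service-one:/var/lib/service-one   # the app's own path mapped to a folder you chose
  service-two:
    image: example/service-two
    ports:
      - "8082:3000"
    volumes:
      - ./data/service-two:/config
```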

BellyPurpledGerbil,

I approve of this expanded answer. I may have been too ELI5 in my post.

If the OP has read this far, I’m not telling you to use docker, but you could consider it if you want to store all of your services and their configurations in a backup somewhere on your network so if you have to set up a new raspberry pi for any reason, now it’s a simple sequence of docker commands (or one docker-compose command) to get back up and running. You won’t need to remember how to reinstall all of the dependencies.
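
A sketch of that recovery on a fresh Pi, assuming the compose files and volume directories were in your backup (paths are just examples):

```sh
# restore the backup, then per service:
cd /srv/docker/compose/my-service
docker compose up -d    # older installs use `docker-compose up -d`
```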

Lifebandit666,

I’ve used Docker a fair bit over the years because it’s a simple line of code I can copy/paste to get a simple web server running.

I ran Home Assistant Supervised in Docker for many years. It was a few lines of code and then I basically had Home Assistant OS running on my Pi without it taking over the whole Pi, meaning I could run other things on it too.

That ended when HA just died one day and I had no clue how to get it running again. I spent a day trying, then just installed HA OS on the Pi instead.

Anyway I now have a Dell Optiplex and Proxmox and I’ve gone back to Docker. Why? Well I discovered that I could make a Linux VM and install Docker on it, then add the Docker code to install a Portainer client to it, then make that into a template.

Meaning I can clone that template and type the IP address into Portainer and now I have full access to that Docker instance from my original Portainer container. That means I can bang a Docker Compose file into the “Stack” and press go, then tinker with the thing I wanna tinker with. If I get it working it can stay, if I don’t then I just delete the VM and I’ve lost nothing.

Portainer has made Docker way more accessible for me. I love a webui

towerful,

I use proxmox to run debian VMs to run docker compose “stacks”.
Some VMs are dedicated to a single service’s docker compose stack.
Some VMs are for a docker compose of a bunch of different services.
Some services are run across multiple nodes with HA VIPs and all that jazz for “guaranteed” uptime.
I see the guest VM as a collection, but there is only ever 1 compose file per host.
It has a bit of overhead, but it makes things really easy to reason about, with separate VLANs and firewall rules, etc.

AlexPewMaster,

What is Portainer? You’ve said that it’s a web UI, but what exactly does it provide you with?

Lifebandit666,

Well the webui provides me with a list of containers, whether they’re running or not, the ports that are opened by the containers. There’s Stacks which are basically Docker Compose files in a neat UI. The ability to move these stacks to other instances. There’s the network options and ability to make more networks, the files that are associated with the containers.

And not just for the instance I’m in, but for all the instances I’ve connected.

In my previous experience with Docker these are all things I need to remember commands to find, meaning I most often have to Google the commands to find out what I’m after. Here it’s all neatly packaged in a web page.

Oh and the logs, which are really useful when tinkering to try get something up and running
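
For comparison, the plain-CLI equivalents of what Portainer puts on one page are roughly:

```sh
docker ps -a                  # containers, their state and published ports
docker logs -f mycontainer    # follow a container's logs
docker network ls             # list networks
docker compose up -d          # bring a "stack" (compose file) up
```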

DontNoodles,

Does Portainer, and Docker in turn, allow taking/accessing something like point-in-time snapshots of containers like VM software does? They make it easy to tinker with stuff, knowing that if I mess up, I can go back to a snapshot and be good again.

Lifebandit666,

Not to my knowledge no

AlexPewMaster,

Sounds awesome! I’ve taken a look at Portainer and got confused on the whole Business Edition and Community Edition. What are you running?

Lifebandit666,

Community edition. It’s free!

Oisteink,

Docker can be many things - and portainer is a nice replacement for those using docker for running services. It’s got a great web interface. For automation and most development docker and compose is my pick. Also a good fit for those that only use X to spawn terminals.

ZeldaFreak,
@ZeldaFreak@lemmy.world avatar

Docker is amazing but not needed. You can compare it to a simpler VM. You can take a Docker container and run it on any machine. You get an environment that is separate from your host, and you and the container can only interact with it via defined points (volumes and ports).

Imagine you need to run a second Mumble server. I never set one up, but often a second instance is not that easy. With Docker it’s easy. The only difference is that you need to use different ports when you only have one network interface, or you use a reverse proxy. You can create a second instance to test stuff without interrupting your production system. It’s also a security benefit, because it’s isolated to some degree and you can remove it easily.
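
A rough sketch of that second instance (the image name, data path, and Mumble’s default port 64738 are assumptions based on the common Docker image, so check the docs of whichever image you use):

```sh
# first instance on the default port (TCP and UDP)
docker run -d --name mumble1 -p 64738:64738 -p 64738:64738/udp \
  -v mumble1-data:/data mumblevoip/mumble-server

# second, fully independent instance, just on a different host port
docker run -d --name mumble2 -p 64739:64738 -p 64739:64738/udp \
  -v mumble2-data:/data mumblevoip/mumble-server
```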

I started using it with MSSQL Server, because I hated how invasive it is on a Windows machine, especially since I just needed it temporarily to do stuff with it. I’m not a Microsoft admin, and I know that servers from Microsoft are a different level. Docker allowed me to start, stop, and remove it very easily. After that I started using it for a lot of things and brought my NAS to the next level.

Also worth mentioning are Linux Containers (LXC). They are in Proxmox, but I have less knowledge of them. They feel more like a full VM than Docker but use fewer resources. This is the reason containers in general are more popular: they are less resource-hungry than a full VM but still have some of the benefits compared to running everything on one machine. LXC feels more like a full system than Docker. With Docker you rarely get into the system; you may execute some commands, like a create-user command or a one-time job, but you don’t usually access it via a shell from the inside (though it’s possible). With LXC, on the other hand, you use the shell.
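
For reference, “getting into” a running Docker container when you do need a shell is one command (possible, just not the usual workflow):

```sh
docker exec -it mycontainer /bin/sh    # or /bin/bash if the image ships it
```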

kevincox, (edited )
@kevincox@lemmy.ml avatar

I feel that a lot of people here are missing the point. Docker is popular for selfhosted services for a few main reasons:

  1. It is one package that can be used on any distribution (or even OS with a Linux VM).
  2. The package contains all dependencies required to run the software so it is pretty reliable.
  3. It provides some basic sandboxing against non-malicious services. Basically the service can’t scribble all over your filesystem. It can only write to specific directories that you have given it access to (via volumes) other than by exploiting security vulnerabilities.
  4. The volume system also makes it very obvious what data is important and needs to be backed up or similar; you have a short list (see the sketch after this list).
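
A minimal sketch of points 3 and 4 (the image name and host paths here are illustrative, not from the comment):

```sh
# the container can only write inside the paths you hand it,
# and those same paths are your complete backup list
docker run -d --name myapp \
  -v /srv/myapp/config:/config \
  -v /srv/myapp/data:/data \
  example/myapp:latest
```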

Docker also has lots of downsides. I would generally say that if your distribution packages software I would prefer the distribution’s package over the docker image. A good distribution package will also solve all of these problems. The main issue you will see with distribution packages is a longer delay before new versions are made available.

What Docker completely dominated were the previous cross-distribution packaging options, which typically took one of the following strategies:

  1. Self-contained compiled tarball. Run the program inside as your user. It probably puts its data in the extracted directory, maybe. How do you upgrade? Extract and copy a data directory? Self-update? Code is mutable and mixed with data, gross.
  2. Install script. Probably runs as root. Makes who-knows-what changes to your system. Where is the data? Is the service running? Will it auto-start on boot? Hope that install script supports your distro.
  3. Source tarball. Figure out the dependencies. Hope they don’t conflict with the versions your distro has. Set up users and setup scripts yourself. Hope the build doesn’t take too long.

CyberSeeker,

Sorry if I’m about 10 years behind Linux development, but how does Docker compare with the latest Flatpak trend in application distribution? How you have described it sounds somewhat similar, outside of also getting segmented access to data and networks.

towerful,

Docker is to servers, as flatpak is to desktop apps.
I would probably run away if I saw Flatpak on a headless server

matcha_addict, (edited )

Flatpak has better security features than Docker. While it’s true that it’s not designed with server apps in mind, it is possible to use its underlying “bubblewrap” to create isolated environments. Maybe in the future, tooling will improve its features and bridge the gap.
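
As a rough illustration of the idea (not a recommended or complete configuration, just a sketch of bubblewrap’s flags; the service binary and paths are hypothetical), confining a service directly with bubblewrap could look like:

```sh
# read-only view of the OS, private /tmp, write access only to its data dir,
# no namespaces shared with the host except the network
bwrap --ro-bind /usr /usr --ro-bind /etc /etc \
      --symlink usr/bin /bin --symlink usr/lib /lib --symlink usr/lib64 /lib64 \
      --dev /dev --proc /proc --tmpfs /tmp \
      --bind /srv/myapp/data /data \
      --unshare-all --share-net \
      /usr/bin/myapp
```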

kevincox,
@kevincox@lemmy.ml avatar

For desktop apps Flatpak is almost certainly a better option than Docker. Flatpak uses the same core concepts as Docker but Flatpak is more suited for distributing graphical apps.

  1. Built in support for sharing graphics drivers, display server connections, fonts and themes.
  2. Most Flatpaks use common base images. Not only will this save disk space if you have lots of, for example, GNOME applications, as they will share the same base, but it also means that you can ship security updates for common libraries separately from application updates. (Although locked insecure libraries are still a problem in general, it is improved over the Docker case.)
  3. Better desktop integration via the use of “portals” that allow requesting specific things (screenshot, open file, save file, …) without full access to the user’s system.
  4. Configuration UIs that are optimized for the desktop use case: graphical tools to install, uninstall, manage permissions, …

Generally I would still default to my distro’s packages where possible, but if they are unsuitable for whatever reason (not available, too old, …) then a Flatpak is a great option.
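
For completeness, the day-to-day Flatpak commands look like this (the application ID is just an example):

```sh
flatpak install flathub org.mozilla.firefox    # install from the Flathub remote
flatpak run org.mozilla.firefox                # run it
flatpak override --user --nofilesystem=home org.mozilla.firefox   # tighten its filesystem access
```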
