Reference: Docker Setup and Storage Management
Why use Docker for deployment, how to install it on a Linux server, and best practices for managing application data.

Docker has revolutionized software deployment by enabling lightweight, portable, and scalable application environments. Whether you're running a single application or orchestrating multiple services, Docker simplifies development, testing, and deployment processes.
On many of the guides provided by this site, we will use Docker to deploy and maintain our self-hosted services. This Mini-Guide provides a central location for all relevant information about Docker, so it can be referenced from other articles on the site. Here we will cover why Docker is essential for deployment, how to install it on a Linux server, and how to configure persistent storage for application data.
If you are using Unraid, your server already has Docker installed and your application data will typically be stored under /mnt/user/appdata/. You may still find the rest of this guide useful as a reference.
Short Version (TL;DR)
- Install Docker on the server following the installation instructions on the official site. Or, you could use their magic script instead (it worked for me!). Run it with --dry-run first to preview the changes, then again without the flag to actually install:
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh ./get-docker.sh --dry-run
sudo sh ./get-docker.sh
- Set the Docker engine to auto-start after a reboot:
sudo systemctl enable --now docker
- Create and configure the application data location:
sudo mkdir -p /srv/appdata/
sudo chown -R $USER:$USER /srv/appdata/
Why Use Docker For Deployment?
Docker helps developers build, run, and manage applications using containers. Containers are standardized units that contain everything an application needs to run. Using Docker, it is possible to deploy a collection of services using a single configuration file (docker compose) and spin them up and down with minimal effort. This approach ensures that a full service stack is easy to deploy, update, and maintain.
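As a sketch of what that single configuration file looks like, here is a minimal hypothetical docker-compose.yml with one example service (the "whoami" image is just a placeholder, not something the guides require):

```shell
# Write a minimal, hypothetical stack definition to a demo directory.
mkdir -p /tmp/stack-demo
cat > /tmp/stack-demo/docker-compose.yml <<'EOF'
services:
  whoami:
    image: traefik/whoami
    restart: always
EOF
# On a server with Docker installed, the whole stack is then managed with:
#   docker compose up -d    (create and start every service in the file)
#   docker compose down     (stop and remove them)
```

Adding more services is just a matter of adding more entries under `services:`; one `up`/`down` still manages the whole stack.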
Here's a summary of some of the benefits that we get from using Docker for our self-hosted service deployments:
- Portability Across Environments. One of the biggest challenges in software deployment is the classic "it works on my machine" problem. Docker eliminates this by packaging applications and their dependencies into self-sufficient containers. This ensures that the application behaves the same way in development, testing, and production, regardless of the underlying operating system.
- Efficient Resource Utilization. Unlike virtual machines (VMs), which require an entire OS for each instance, Docker containers share the host OS kernel, making them significantly lighter. This allows for better performance, faster startup times, and lower resource consumption, making it possible to run more applications on the same infrastructure. For home lab environments running on Raspberry Pi, low-power mini PCs, or NAS devices, Docker's lightweight nature is a big advantage.
- Simplified Dependency Management. With Docker, all dependencies (libraries, configurations, and environment settings) are bundled into a single image. This eliminates the hassle of installing packages manually on different servers and ensures that deployments are consistent and reproducible.
- Isolation and Security. When self-hosting services exposed to the internet, security is a major concern. Docker isolates applications, reducing the risk of system-wide vulnerabilities, while also preventing conflicts between services running on the same host. This ensures better security and stability, as an issue in one container doesn't affect others.
- Easy Service Management. With Docker, you can easily deploy, update, and remove services without affecting the rest of your setup. Running applications like Nextcloud, Immich, or Plex becomes much simpler than dealing with traditional manual installations.
- Simplified Networking. Docker's built-in networking features make it easy to isolate services or link multiple containers together. For example, you can run a reverse proxy (like Traefik) to route traffic between your self-hosted services securely.
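As a quick illustrative sketch (the network and container names here are made up, and these commands assume a server where Docker is already running): containers attached to the same user-defined network can reach each other by name, without exposing ports to the host.

```shell
# Create an isolated, user-defined bridge network (name is an example).
docker network create demo-net

# Start an example service attached to that network.
docker run -d --name app --network demo-net traefik/whoami

# Another container on the same network can resolve it by container name.
docker run --rm --network demo-net curlimages/curl http://app/
```

This name-based resolution is what makes reverse-proxy setups like Traefik straightforward: the proxy and the services simply share a network.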
Setting up Docker On Linux
Install Docker on Linux (If Not Already Installed)
In order to deploy containers on our server, we need to have Docker and Docker Compose. We may already have these tools installed. We can check if this is the case by running the following commands: docker version and docker compose version. If either of those commands fails, we need to install or update our version of Docker. The steps may differ depending on the Linux distribution running on our server. To complete this setup step, follow the installation instructions on the official site.
If you don't know which Linux distribution is running on the server, you can try one of the following commands to print out that information: cat /etc/os-release or hostnamectl
Set Docker to Auto Start After a Server Reboot
In the event our server is restarted, for example after an OS update, we want all our self-hosted services to start automatically. Docker can take care of restarting each of our containers: we can configure that in our docker-compose.yml file by setting the restart: always policy on each service. However, that requires that the Docker engine itself starts automatically after the server reboots. The following command will get that done:
sudo systemctl enable --now docker
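To confirm the change took effect, systemd can report the unit's state (these are standard systemctl subcommands; the exact status output varies by distribution):

```shell
# Should print "enabled" once the enable command above has been run.
systemctl is-enabled docker

# Should print "active" while the Docker engine is running.
systemctl is-active docker
```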
Application Data Storage Location
When running applications in Docker, it's important to ensure that your application data is stored separately from the container itself. By default, Docker stores volumes in /var/lib/docker/volumes/, but for better organization, redundancy, and backup management, it's preferable to define a dedicated storage location.
An important benefit of using a location that is independent from Docker-managed volumes is that we can define this location as a mount point backed by a redundant storage layer. For example, we could create a ZFS dataset with parity and use the dataset mount point as our application data location.
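For reference, the ZFS idea sketches out roughly like this. The pool name ("tank") and device names are hypothetical, and these commands require real spare disks, so treat this as an outline rather than something to paste in:

```shell
# Hypothetical: create a mirrored pool from two spare drives.
# Device names are examples; double-check yours with lsblk first.
sudo zpool create tank mirror /dev/sda /dev/sdb

# Create a dataset and mount it directly at our appdata root.
sudo zfs create -o mountpoint=/srv/appdata tank/appdata
```

With either a mirror or a raidz layout, a single drive failure no longer takes the application data with it.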
The guides on this site assume all of our containers store their application data under /srv/appdata/. If you decide to use a different path, be careful when you copy-paste content from the guides, and update the references to the application data location as appropriate.
What Is Considered Application Data?
Not all the information created and used by containers needs the persistent storage provided by our application data location. Many containers produce ephemeral files which can safely be recreated every time the container restarts or is updated, as is the case with some cache and log files, for example.
The type of information that will be stored in our application data location will typically include:
- Configuration (text) files that we'll need to create or modify as part of managing our services.
- Database files, usually created and managed by the container, that store persistent information, typically user generated data. For example, Grafana will store dashboards in a SQLite database file.
- Externally managed content will also need to be stored in this location. For example, if we use Caddy to host a static website, the www folder with the website content will be stored in the application data location.
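In practice, each of these ends up as a bind mount from the appdata tree into the container. A hedged sketch using Grafana as the example from above (the container-side path follows the official Grafana image's default data directory, but verify against the image documentation for your version):

```shell
# Sketch of a compose "volumes" mapping from our appdata root into a container.
mkdir -p /tmp/appdata-demo
cat > /tmp/appdata-demo/docker-compose.yml <<'EOF'
services:
  grafana:
    image: grafana/grafana
    volumes:
      # host path under the appdata root : data path inside the container
      - /srv/appdata/grafana:/var/lib/grafana
EOF
```

Everything the service needs to survive a container rebuild lives on the left-hand (host) side of that mapping.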
Create And Configure Our Application Data Storage
We are going to use /srv/appdata/ as the top-level directory for all of our container persistent data. Each service will have a folder dedicated to it in this location, using the name of the container as the folder name. For example, our Nextcloud deployment will use /srv/appdata/nextcloud/ as its application data storage location.
The only thing we need to do is create the parent /srv/appdata/ directory with the right permissions:
sudo mkdir -p /srv/appdata/
sudo chown -R $USER:$USER /srv/appdata/
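The per-service folders can then be created the same way as services are added. A small sketch (the service names are examples, and APPDATA points at a temporary directory here so it runs without root; on the server it would be /srv/appdata/):

```shell
# APPDATA stands in for /srv/appdata/ so this sketch needs no sudo.
APPDATA=/tmp/appdata-demo2

# One folder per service, named after the container (names are examples).
for svc in nextcloud grafana caddy; do
  mkdir -p "$APPDATA/$svc"
done

ls "$APPDATA"
```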
Use Portainer to Manage Your Containers
After following this guide you are now ready to start deploying applications using Docker and Docker Compose through the CLI (Command-Line Interface). However, there is an easier way to manage your containers. With Portainer you can deploy, inspect, manage, and update your applications from a web browser. The guides on this site will often provide application templates that can be installed with 1-click. You should check it out, and we have a guide for that.
What's Next?
In future guides we will explore a few advanced steps we can take to improve our experience with Docker, including:
- More resilient application data storage with ZFS. By using 2 or more hard drives to create a ZFS dataset with parity, we can define /srv/appdata/ as a mount point for our dataset. This makes our storage layer resilient to a hard drive failure.
- Easier container management with Portainer. Docker has an easy-to-use CLI (Command Line Interface) to set up and manage our containers. Portainer takes that to the next level with a user-friendly, web-based container management UI. With Portainer you can create or modify, start and stop, and inspect your containers from your web browser rather than through a terminal and SSH. Follow our Guide to Portainer and get it up and running in just a couple of minutes.
- Simpler configuration using code-server. Just like Portainer gives us a web-based interface to manage our Docker containers, code-server gives us a web-based Visual Studio Code instance with our application data location as the root workspace. If we add a reverse proxy (like Traefik) and Tailscale on top, we can conveniently and securely manage our configuration files using a web browser from anywhere in the world.