Docker

Docker is an open source software project launched in March 2013 that automates the deployment of containers on Linux. The Docker container engine manages and configures the necessary Linux kernel namespaces and features such as cgroups to run applications that are delivered in an immutable, layered package format known as a container image.

Docker makes it easy for users to run applications on their local machines regardless of how the system is set up, since everything required to make the application run is bundled within the container image. Developers also benefit by easily building, testing, running, and delivering their applications in a well-known environment, untainted by any external factors. Different versions of the same software can also run simultaneously without risk of software conflicts.

Docker's surge in popularity has led to increased use of containerized applications in general over the past few years. With the rise of orchestration tools such as Kubernetes, containers can now be managed by software: containers can be brought up and down depending on load, response times, or hardware availability.

More recently, the Open Container Initiative (OCI) was launched to create an industry standard for container formats and runtimes. Since version 1.11, the Docker engine has used the OCI-compatible container lifecycle management tool containerd, which provides a standardized API for managing, transferring, and starting containers. Containerd in turn uses runc, which handles the spawning and running of containers according to the OCI specification. Other projects such as Podman and rkt follow the OCI standard but use underlying technologies different from containerd+runc. Since they all follow the OCI spec, container images remain portable across different host environments.

Motivation behind Containers

Docker allows developers to clearly define the scope of the application, including:

  • The underlying OS, application and configuration files
  • Any required network ports, defined as container network ports
  • Any required filesystem mounts for persistent storage, defined as container volumes
  • Any dependency relationships between services that make up the application, defined as a service with orchestrators like docker-compose or Kubernetes.

Unlike with a traditional shared system, applications run within their own sandbox with a well-known system configuration untainted by any other applications that may also be running. Such a sandboxed environment avoids any potential for software conflicts or library version issues.

As a user or system operator, containers also provide an easy way to start a specific version of an application without manually installing, compiling, or configuring the software. Using Docker, it's as simple as finding the appropriate docker image and version from one of the many publicly available container registries (such as Docker Hub) and then starting the container. If no public docker image is available, you can build a custom Docker image with a Dockerfile, which outlines the steps needed to build an image from an existing base image (such as a clean OS image) or from scratch.
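
For example, running a specific version of an application from a public registry is just a pull and a run (the alpine image here is only illustrative):

# docker pull alpine:3.12
# docker run --rm alpine:3.12 cat /etc/os-release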

Whether the container image is downloaded or built with a Dockerfile, the process of starting the application is well known and has no dependencies on the running system or other software, allowing for repeatable and reproducible application deployments. Applying this idea further, we can deploy applications this way on multiple machines and then set up a network load balancer across these servers to provide high availability, which is what container orchestration tools such as Kubernetes allow you to do.

Quick Usage

A cheat sheet of some sort.

docker images
  Lists docker images that you created or pulled using docker pull.

docker pull
  Pulls a docker image from a registry (either docker.io or a private registry).
  • Version numbers can be given after a colon.

docker load
  Loads a docker image from a file. Eg. docker load -i my_image.tar

docker run
  Runs a command in a new container. Eg. docker run alpine ls -l
  • For commands that require an interactive shell, pass in -it.
  • Pass in --name for a custom name instead of a long hash.
  • Pass environment variables using -e.
  • Pass -d to detach immediately.
  • Pass -P to publish all exposed container ports to random high ports on the host.

docker exec
  Starts another process in the specified container. Eg. docker exec -ti container-name bash

docker ps
  Lists running containers. Pass -a to see all containers, including ones that have exited.

docker port
  Shows the port mappings between the container and the host.

docker stop
  Stops a container by sending SIGTERM, waiting 10 seconds, then sending SIGKILL. The wait time can be changed with --time.

docker kill
  Stops a container by sending SIGKILL immediately. The signal can be set with --signal.

docker rm
  Removes a container. Use -f to stop and remove. Any volumes associated with the container remain unless the -v flag is given.
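
Putting a few of these flags together, a hypothetical web server could be started, checked, and removed like so (the image name and environment variable are illustrative):

# docker run -d --name web -e APP_ENV=production -p 8080:80 nginx:latest
# docker ps
# docker stop web && docker rm web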

Cleaning Up

Pulled from: https://zaiste.net/posts/removing_docker_containers/

docker ps -aq -f status=exited
  List all exited containers.

docker ps -aq --no-trunc | xargs docker rm
  Remove stopped containers. Running containers are not removed; an error message is printed for each of them instead.

docker images -q --filter dangling=true | xargs docker rmi
  Remove dangling/untagged images.

docker ps --since a1bz3768ez7g -q | xargs docker rm
  Remove containers created after a specific container.

docker ps --before a1bz3768ez7g -q | xargs docker rm
  Remove containers created before a specific container.

docker build --rm
  Use --rm together with docker build to remove intermediary images during the build process.
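
On Docker 1.13 and later, much of this can be done with the built-in prune commands: docker container prune removes all stopped containers, docker image prune removes dangling images, and docker system prune does both and also removes unused networks.

# docker container prune
# docker image prune
# docker system prune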

Installation

Install docker using your distro's package manager and then start the service.

CentOS

# yum install -y yum-utils
# yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo

# yum install docker-ce docker-ce-cli containerd.io

Fedora

On a Fedora distro (https://docs.docker.com/engine/installation/linux/docker-ce/fedora/#install-using-the-repository):

# dnf config-manager --add-repo  https://download.docker.com/linux/fedora/docker-ce.repo
# dnf install docker-ce
# systemctl start docker

If installed properly, running the hello-world image should work without issue:

# docker run hello-world

To play with docker, you will need to run things as root or be part of the docker group that has access to docker's socket. To allow normal users access to docker, add them to the docker group:

# usermod -aG docker username

See more here: https://docs.docker.com/engine/installation/linux/linux-postinstall/

Fedora 31

https://forum.linuxconfig.org/t/how-to-install-docker-on-fedora-31-linuxconfig-org/3605/2

Fedora 31 switched to cgroup v2 by default, which the docker engine did not support at the time. Attempting to run a container fails with an error like:

# docker run ...
 ---> Running in b35c44c21ad7
OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused \"open /sys/fs/cgroup/docker/cpuset.cpus.effective: no such file or directory\"": unknown

Revert back to cgroup v1 using:

# grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
# reboot

or edit the grub config manually:

# vi /etc/default/grub
## Add systemd.unified_cgroup_hierarchy=0 to the GRUB_CMDLINE_LINUX value
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot

Introduction

There are three main components to Docker, all of which can be managed through the docker command line utility:

  1. Images: A docker-based container image contains the application and its environment
  2. Registries: A docker image repository. After building an image, it can be uploaded (pushed) and later downloaded (pulled) by others.
  3. Containers: A regular Linux container running on the docker host.

My normal usage with Docker typically does not involve the docker command directly, except to list or enter a container. Utilities such as docker-compose make creating and managing docker containers extremely easy, as all container configuration is defined inside a single YAML file. Container networking is also made trivial with docker-compose, since it handles the creation of docker networks and the linking of containers to these networks.
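
A minimal sketch of such a compose file, describing two hypothetical services (the names and images are only illustrative):

version: "3"
services:
  web:
    image: nginx:latest
    ports:
      - "8080:80"
  db:
    image: postgres:13
    environment:
      - POSTGRES_PASSWORD=example

Running docker-compose up -d creates a project network and attaches both services to it, so web can reach db by its service name.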

For more complex applications, consider using an orchestrated system such as Kubernetes.

Images

An image contains all the necessary files to start an application: all system files, libraries, application executables, and even configuration files (though configuration files are typically injected through volumes to customize a container's functionality; more on this later). Ideally, an image should contain just enough files to get the application working, both to minimize the image size and to reduce the attack surface.

Docker images are a series of filesystem layers, each of which inherits and modifies files from the preceding layer. When a container is started from an image, all layers are combined using an overlay filesystem to create a temporary sandbox filesystem for the container to execute in. Any changes made to this temporary filesystem are lost when the container is deleted. The exception is if modifications to this temporary filesystem are saved as another layer using docker commit, but this is not recommended as it defeats the benefits of docker images.

Applications that require persistent storage, such as a database server, should have their data files stored on a persistent volume so that when the container is recreated (such as when an updated image is pulled), the application can still access its data files independent of the image.
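
As a sketch, a database container might keep its data on a host path so that the data survives container recreation (the image and paths here are illustrative):

# docker run -d --name db -v /srv/pgdata:/var/lib/postgresql/data -e POSTGRES_PASSWORD=example postgres:13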

To list all images on your system:

# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              2d696327ab2e        2 weeks ago         122MB
alpine              latest              76da55c8019d        3 weeks ago         3.97MB
hello-world         latest              05a3bd381fc2        3 weeks ago         1.84kB

To create a container, start a program with one of these images using the docker run command. A few common arguments that are passed to docker run are:

  • -ti - terminal interactive (for interactive shells only)
  • -d - start the container detached; use docker attach <container id or name> to attach.
  • --name - provides a name rather than the random one that is generated.
  • --rm - removes the container after it stops (ie. the container application (in this example the shell) exits)

For example, to start a bash shell in the ubuntu image:

# docker run -ti --rm ubuntu:latest bash
root@c148fbe71597:/# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"

When you exit the shell, the container will exit as well. You can see running docker containers using the docker ps command. After quitting, our container is in an 'Exited' status. Each container has a randomly generated name; when referencing a container, you can use either the container ID or this name.

Common arguments to docker ps are:

  • -a - To see all containers including ones that have stopped
  • -l - Show the most recently created container
# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                       PORTS               NAMES
3ab813a72c70        ubuntu:latest       "bash"              5 minutes ago       Exited (130) 5 minutes ago                       agitated_dubinsky
2c4251af3005        alpine:latest       "bash"              6 minutes ago       Created                                          eager_murdock
d9f2d1a0025c        hello-world         "/hello"            29 minutes ago      Exited (0) 29 minutes ago                        musing_lewin

If you wish to re-enter the exited container, start it back up with:

# docker start 3ab813a72c70
# docker exec -ti 3ab813a72c70 sh

The changes you made inside a container can be converted into another image using the docker commit command. This is not recommended if your goal is to make containers that can be recreated from scratch (ie. strictly from Dockerfiles). There are two ways to go about this:

Manually commit the container ID (the hash), and then tag the image ID with an image name.
# docker commit <container id>
returns <image id>
# docker tag <image id> <image tag>
Or, commit the container using the randomly generated name and the image name.
# docker commit <container name> <image tag>
returns <image id>

Containers

Dealing with Stopped Containers

A container stops when the first process in the container terminates (whether it exits intentionally, segfaults, or receives a stop/kill signal). This is similar to how killing init on a Linux system would stop the system.

To stop a running container, use either the docker stop or docker kill commands.

A stopped container can also be started again using docker start. Interactive commands (like a bash shell) will require the -ai flags.

Troubleshooting a stopped container can be done by viewing any output generated by the container with the docker logs command. While all text output generated by a container is logged by docker, it is still a good idea to rely on file logs for performance (docker logs can be slow).
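
For example, to view the last 100 lines of output and follow any new output from a container (the container name here is illustrative):

# docker logs --tail 100 -f my-container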

Networking

Docker provides networking to containers through something similar to port forwarding between the host and the container. Port mappings are provided by the -p outside:inside/proto argument when starting a container, where 'outside' is the host port and 'inside' is the container port.

The inside (container) port is required. If no outside port is given, docker will pick a random available port on the host system. If no protocol is given, docker defaults to TCP.

For example:

docker run --rm -ti -p 8080:80 web-server bash

A service listening on TCP port 80 inside the container will be accessible through TCP port 8080 on the host.

Side note: You can make a docker container reachable only from the host itself by binding the mapping to the loopback address, eg. docker run -p 127.0.0.1:1234:1234/tcp. Docker will then forward host-only traffic on tcp 1234 to the container.

When 'exposing' a particular port on a container, docker sets up the standard network model:

  1. Create a veth/eth interface pair
  2. Connect veth to the docker bridge (docker0)
  3. Add eth to the docker container namespace
  4. Assign an IP address on eth (within same network as docker0)
  5. Create any port forwarding necessary to 'expose' ports (eg. append rules to the DOCKER iptables chain; see the commands below).
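
Each of these pieces can be inspected on the host. For example (interface and chain names may vary by setup):

# docker network inspect bridge
# ip addr show docker0
# iptables -t nat -L DOCKER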

Linking

Docker provides a way for containers to 'link' to other containers on the same host. By linking one container to another, applications can talk to applications in a separate container directly. When starting a container linked to another container, Docker automatically updates the /etc/hosts file and sets up networking such that applications can talk to the other container directly, without traffic needing to route through the host.

Example:

On the 'server':
# docker run -ti --rm --name server ubuntu bash
root@85bdf83d4477:/# cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2      85bdf83d4477
On the 'client':
# docker run --rm -ti --link server --name client ubuntu bash
root@60e2d5ad2dbc:/# cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2      server 85bdf83d4477
172.17.0.3      60e2d5ad2dbc

The client can connect to 'server' directly by hostname.

This may be brittle, especially if containers are stopped/restarted anytime in the future separately.

Dynamic Linking

Containers can be linked together by creating a docker network. This is similar to a private network between two or more containers.

To create a network and use it:

# docker network create example
# docker run --rm -ti --net=example ubuntu bash

Unlike linking in the previous section, container names are looked up using the embedded docker DNS server. Starting and restarting containers will not break lookups, since the DNS server always has the most up-to-date information (versus the static /etc/hosts file that gets generated on startup). In the example above, none of the containers that start on the 'example' network will appear in the container's hosts file, but their hostnames will still resolve.
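
As a quick illustration, start a named container on the network and resolve its name from a second container (the name 'server' here is only an example):

# docker run -d --rm --net=example --name server ubuntu sleep infinity
# docker run --rm --net=example ubuntu getent hosts server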

Docker networks can be listed using docker network ls, and a network's configuration can be dumped as JSON using docker network inspect example.

Image Building with Dockerfiles

A Docker image contains all the necessary files an application needs to run. Container images can either be built from a blank slate (from scratch) or built upon another existing image (such as one containing an operating system or application). How a container image should be built is outlined in a Dockerfile, which defines all the steps required to build the image: the base image to use, the steps needed on top of this existing image, and any other options such as volumes or exposed network ports that are needed to make the container run appropriately.

With a Dockerfile created, an image can then be built using the docker build command on a directory containing the Dockerfile. Optionally, a tag specifying the version of the image can be set, and the image can be pushed to a container registry for others to use. A Dockerfile should contain, at the very least, the base image, some intermediate commands to set the application up, and the command to execute when the container starts. For example:

# The base image to use (eg. the operating system, or service)
FROM baseimage:version

# Things to do on the base image. This can contain any number of RUN or COPY commands
# Each RUN or COPY command will generate a new docker image.
RUN yum install any-dependencies

# Expose any ports that are required
EXPOSE 80

# run the application. Use an ENTRYPOINT script for complex images
CMD ["python", "/some/app.py"]

Each step builds on top of the results from the previous step. A new overlay filesystem layer is created for each RUN, ADD, and COPY step. All other instructions only modify the image's metadata.

To build lightweight container images, it's best to keep the number of overlay layers as low as possible. A high number of layers results in a bloated image and possibly slower performance when an application needs to do many reads/writes on the container filesystem.

The number of layers can be reduced by combining as many steps together as possible. For example, multiple RUN commands can be combined into one by joining them with &&. Multiple ADD or COPY instructions can be collapsed by moving all files into a single directory so that the entire contents of the directory can be copied in one command (see the example below).
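
For instance, an install-and-cleanup sequence can be collapsed into a single layer rather than three (the package name is illustrative):

RUN yum install -y any-dependencies && \
    yum clean all && \
    rm -rf /var/cache/yum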

The size of the image itself can be reduced by removing unnecessary files such as source code, caches, or temporary files.

When rebuilding an image, docker can make use of cached images to help speed up the process. When debugging or developing a new image, keep frequently changing operations near the end of the Dockerfile.

To build the Dockerfile in the current directory (.):

# docker build -t name-of-result .

After building, the resulting image will be on the local system (run docker images) and can be used immediately by referencing the image with its tag name.

Images can be pushed to a remote registry using docker push.

Container Image Security

A few things to keep in mind when building container images:

  1. Do not store secrets in environment variables defined in a Dockerfile or copied into the container itself. These will be baked into the image when built, and anyone with access to the image can retrieve your secrets. Instead, secrets should be passed to the container at runtime via volumes or container environment variables.
  2. Use a trusted base image pinned to a specific version. If you rely on someone else's base image, you are trusting that the underlying system they provide is clean. Images on Docker Hub without a namespace are official images curated by Docker and are generally considered trustworthy. Pinning a version ensures reliability and repeatability when building.
  3. Avoid external dependencies. This includes curl bashing (curl remote-server| bash) or ADDing remote files from the internet during build time. To some extent, this also applies to packages that are installed with package managers during the setup process, but this may be difficult to avoid completely.
  4. While the Docker engine itself runs as root, your containers should avoid running as root whenever possible. Changes that need to be made to the base image should be done at build time; once the container starts, it should only have regular user access to the system. This may cause issues with services that need to bind to low ports, but that can be mitigated by having the service bind to a high port number and remapping the port on the container instead (see the sketch below).
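
A minimal sketch of such a non-root image, assuming a hypothetical application binary and a service moved to a high port:

FROM baseimage:version
RUN useradd --system --no-create-home appuser
COPY server /opt/app/server
USER appuser
EXPOSE 8080
CMD ["/opt/app/server", "--port", "8080"]

On the host, the port can be remapped back down with something like docker run -p 80:8080 image:tag.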

Management and Orchestration Tools

Some container management or orchestration tools mentioned in these notes:

  • docker-compose
  • Docker Swarm
  • Kubernetes

Tasks

Container accessing a host service

To allow a container access to a service that's running on the host, put the container on the docker_gwbridge network. Using Docker Compose, this is done by attaching the container to an external network backed by docker_gwbridge:

services:
  grafana:
    image: grafana/grafana
    networks:
      - traefik
      - dockernet

networks:
  # the traefik network is assumed to be defined elsewhere
  traefik:
    external: true
  dockernet:
    external:
      name: docker_gwbridge
Ensure that the service running on the host is listening on an address from the docker_gwbridge interface.
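
To see which address range docker_gwbridge uses (and therefore which address the host service should listen on):

# ip addr show docker_gwbridge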

Overriding Entrypoint in docker run

To override an image's default entrypoint, pass in the --entrypoint parameter.

# docker run -ti --rm --entrypoint "/bin/sh" image:tag "-i"

Resource Constraints

# docker run --memory 512m ...         (hard memory limit; the value here is an example)
# docker run --cpu-shares 512 ...      (CPU weight relative to other containers)
# docker run --cpu-quota 50000 ...     (hard limit on CPU time in general)

Custom Unsecured Registry

You may run your own local registry (as a Docker container, no less), but unless you make it secure with a signed certificate, you need to list it in /etc/docker/daemon.json and restart the docker daemon:

{
        "insecure-registries": ["registry.steamr.com:5000","registry:5000"]
}

Changing Docker IP Subnets

Docker uses the 172.17.0.0/16 range for its internal networking. To run docker on networks that utilize this range, you will need to change docker's bridge IP address.

# ip a
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:68:e8:2f:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever

To change docker's bridge IP address, stop the docker service (systemctl stop docker) and then edit /etc/docker/daemon.json (create it if it doesn't exist) with the following:

{
  "bip": "192.168.252.1/22"
}

Restart the docker service using systemctl start docker.

Ensure that docker has updated the docker bridge IP and also the IPTable masquerade rules.

# iptables -t nat -L

You can also manually clear the docker rules:

# systemctl stop docker
# iptables -t nat -F POSTROUTING
# ip link set dev docker0 down
# ip addr del 172.17.0.1/16 dev docker0
# ip addr add 192.168.252.1/22 dev docker0

See Also: https://docs.docker.com/engine/userguide/networking/default_network/custom-docker0/

DNS within Containers

All containers use the embedded docker DNS server at 127.0.0.11, which is provided by the docker engine. The embedded DNS server allows for service discovery of containers that are aliased by a link or that share the same network (as is the case with docker-compose).

When docker starts, it looks for a valid nameserver in the host's /etc/resolv.conf file. If none is found, it automatically uses the public 8.8.4.4 and 8.8.8.8 DNS servers, which could cause issues in your environment if, for instance, outgoing DNS is blocked or there is a distinction between internal and external DNS zones.

This can be configured with the /etc/docker/daemon.json configuration file by defining an entry for dns and then restarting docker.

{
        "dns": ["10.1.0.10"]
}
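
To verify which nameserver a container ends up with, a quick check might be the following (note that on user-defined networks, you will see the embedded 127.0.0.11 server rather than the configured upstream):

# docker run --rm alpine cat /etc/resolv.conf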

If you decide to set a container's DNS server with --dns=x.x.x.x, it will overwrite /etc/resolv.conf with that DNS server and will break service discovery.

See Also: https://docs.docker.com/v17.09/engine/userguide/networking/configure-dns/

docker_gwbridge

See: https://docs.docker.com/engine/swarm/networking/#customize-the-docker_gwbridge

The docker_gwbridge is a virtual bridge that connects the overlay networks (including the ingress network) to an individual Docker daemon's physical network. Docker creates it automatically when you initialize a swarm or join a Docker host to a swarm, but it is not a Docker device; it exists in the kernel of the Docker host. If you need to customize its settings, you must do so before joining the Docker host to the swarm, or after temporarily removing the host from the swarm.

# docker network inspect docker_gwbridge

Stop or disconnect all attached containers first:

# docker network disconnect --force docker_gwbridge gateway_ingress-sbox

Then, remove and recreate the bridge:

# docker network rm docker_gwbridge
# docker network create \
--subnet 192.168.250.0/22 \
--opt com.docker.network.bridge.name=docker_gwbridge \
--opt com.docker.network.bridge.enable_icc=false \
--opt com.docker.network.bridge.enable_ip_masquerade=true \
docker_gwbridge


NFS volumes in Docker

The docker-volume-netshare Docker plugin (https://github.com/ContainX/docker-volume-netshare) automates the mounting/unmounting of NFS mounts as containers come and go. This provides a way to mount persistent volumes via NFS.

The easiest way to get this is to download one of their releases. Alternatively, you may compile it within a golang:latest container:

# docker run --rm -ti -v $(pwd)/build:/build golang:latest bash
$ go get github.com/ContainX/docker-volume-netshare
$ cp /go/bin/docker-volume-netshare /build

Once you have the binary on hand, set up a systemd service to start it automatically.

On CentOS:

#!/bin/bash

cp docker-volume-netshare /usr/bin/docker-volume-netshare

echo "DKV_NETSHARE_OPTS=nfs" > /etc/sysconfig/docker-volume-netshare

cat <<EOF > /usr/lib/systemd/system/docker-volume-netshare.service
[Unit]
Description=Docker NFS, AWS EFS & Samba/CIFS Volume Plugin
Documentation=https://github.com/gondor/docker-volume-netshare
After=nfs-utils.service
Before=docker.service
Requires=nfs-utils.service


[Service]
EnvironmentFile=/etc/sysconfig/docker-volume-netshare
ExecStart=/usr/bin/docker-volume-netshare \$DKV_NETSHARE_OPTS
StandardOutput=syslog

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable docker-volume-netshare
systemctl start docker-volume-netshare

On Ubuntu/Debian, environment files conventionally live under /etc/default rather than /etc/sysconfig. Adjust the paths above as required.
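
Per the project's documentation, mounting an NFS export should then look something like the following (the server name and export path here are illustrative):

# docker run -ti --volume-driver=nfs -v nfsserver/exported/path:/mount ubuntu bash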

TFTP inside Docker

If you're going to run a TFTP server inside a container, you need the ip_conntrack_tftp and ip_nat_tftp kernel modules loaded. Otherwise, the firewall will just block everything, as seen in this packet capture:

20:31:02.362060 IP 10.1.3.3.2070 > 172.20.0.2.69:  28 RRQ "/pxelinux.0" octet tsize 0
20:31:02.363084 IP 172.20.0.2.49158 > 10.1.3.3.2070: UDP, length 14
20:31:02.363683 IP 10.1.3.3.1024 > 172.20.0.2.49158: UDP, length 17
20:31:02.363742 IP 172.20.0.2 > 10.1.3.3: ICMP 172.20.0.2 udp port 49158 unreachable, length 53
20:31:02.363856 IP 10.1.3.3.2071 > 172.20.0.2.69:  33 RRQ "/pxelinux.0" octet blksize 1456
20:31:02.364482 IP 172.20.0.2.49159 > 10.1.3.3.2071: UDP, length 15
20:31:02.365135 IP 10.1.3.3.1024 > 172.20.0.2.49159: UDP, length 4
20:31:02.365211 IP 172.20.0.2 > 10.1.3.3: ICMP 172.20.0.2 udp port 49159 unreachable, length 40
20:31:03.364295 IP 172.20.0.2.49158 > 10.1.3.3.2070: UDP, length 14
20:31:03.365624 IP 172.20.0.2.49159 > 10.1.3.3.2071: UDP, length 15
20:31:03.366342 IP 10.1.3.3.1024 > 172.20.0.2.49159: UDP, length 4
20:31:03.366405 IP 172.20.0.2 > 10.1.3.3: ICMP 172.20.0.2 udp port 49159 unreachable, length 40
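
Loading the modules fixes this. Note that on newer kernels the equivalent modules are named nf_conntrack_tftp and nf_nat_tftp:

# modprobe nf_conntrack_tftp
# modprobe nf_nat_tftp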

Automatically restart unhealthy containers

Without using any fancy orchestration system (such as docker swarm or kubernetes), the simplest way to restart unhealthy containers is to create a cronjob:

*/5 * * * * docker restart  $(docker ps | grep unhealthy | cut -c -12) 2>/dev/null

Pruning old logs

Truncate all docker logs by running:

# truncate -s 0 /var/lib/docker/containers/*/*-json.log
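
To keep logs from growing unbounded in the first place, the json-file log driver can be capped in /etc/docker/daemon.json (the values here are examples; restart docker afterwards):

{
        "log-driver": "json-file",
        "log-opts": {
                "max-size": "10m",
                "max-file": "3"
        }
}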

Tips & Tricks

  • To detach from a container, use the key combination Ctrl+P Ctrl+Q.
  • To run another program in an existing container, use the docker exec command with the same arguments as docker run.
  • Don't fetch dependencies when containers start. Instead, make it part of the image.
  • If a tag is not provided, it defaults to 'latest'.

Troubleshooting

Fedora 31

When using docker under Fedora 31, you may get:

Error response from daemon: cgroups: cgroup mountpoint does not exist: unknown.

You will need to enable backward compatibility for cgroups with:

# grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"

More information found at https://fedoraproject.org/wiki/Common_F31_bugs#Other_software_issues

Alpine Linux

After a server crash, docker couldn't start. /var/log/docker.log showed a boltdb error: panic: invalid page type: 7: 10. This was only fixed by removing /var/lib/docker/containerd/daemon/io.containerd.metadata.v1.bolt/meta.db and then restarting docker.

Questions & Answers

Some questions I had before I knew anything about Docker and answers afterwards.

How does one run a newer distro on an old host kernel that may have a different kernel ABI or different sets of available syscalls?
For the most part, the kernel system calls are the same. There should be no problems running older distros, since the kernel retains backwards compatibility. Newer distros may be an issue, however.
What are the security implications of root inside the container? Can they break out of the container context?
This is one of the downsides to Docker, as the engine requires root and could be a security issue if the container can somehow gain access to the docker engine. For the most part, this is safe as long as you do not mount the docker socket into the container or give the container extra privileges on the host.
https://github.com/rootwyrm/lxc-media/tree/master/systemd_transmission
https://www.reddit.com/r/freebsd/comments/5vfj3w/docker_vs_jails/
How are updates to the base image done? Does it automatically update containers relying on it? Or will containers need to be rebuilt again?
Images are immutable. Updates to an image will have a different hash even if it has the same tag (eg. latest).
If an image has been updated, pull it with docker pull to obtain the most recent copy from the remote registry.
When a container using this image is recreated, it will use the newest image, and any data not on persistent volumes will be lost.
How is data kept persistent? Does it need to be copied out/in every time it is to be rebuilt?
Persistent data should be saved outside of the container using persistent volumes. eg. -v outside/path:inside/path.
Persistent volumes can have multiple backings, whether it be saved on the host system or remotely with NFS.
NFS access from the container?
You must use something like docker-volume-netshare in order to have NFS volumes in the docker container. It does not seem like you can map an existing NFS volume to a docker volume.
Domain trust - is it the same as a regular VM?
If your application needs the operating system to be on the domain, you're probably doing it wrong. Ideally, your application should be using just LDAP without the need for a trust.
If you absolutely need domain trust in the container, keep in mind that you might need to rejoin when the container is rebuilt (say if a new image is available) or when it eventually loses its trust.

See Also

  • Nginx reverse proxy for HTTPS