Docker is a tool that runs applications inside sandboxed environments called containers on a host system. An application running within Docker is contained in its own environment, which includes all the dependencies it needs to work. By bundling an application's dependencies into a single image, applications with conflicting dependencies can be deployed in separate containers on the same host.

Docker containers leverage Linux kernel features, namespaces and control groups (cgroups), to isolate applications from each other and from the host system; the result is comparable to chroot or BSD jails on steroids. Containers on Linux can also be run with alternatives such as rkt, or by manually creating the namespaces and cgroups yourself.

Why

Docker allows developers to clearly define the scope of an application (where its data and configuration files live, its network settings, any required file mounts, and the base operating system) as well as the dependency relationships between the application's components (i.e. services).

Unlike a traditional server or virtual machine environment, a clean operating system environment can be guaranteed when setting up an application, since Docker confines all changes to the container image rather than the host environment.

Because of Docker's popularity, you can find virtually any application or service as a docker container. If your application requires a database server (of some particular version), simply pull the desired version of the image (e.g. mysql:5.5 or postgres:latest) from a docker registry and run it. With docker, there is no longer a need to spin up a separate VM and manually install and configure the service, because everything is contained within the image. Persistent storage for containers is optional and can be a local directory on the host or some external storage resource such as NFS.

Depending on the service, it's also possible to spin up docker applications across multiple docker hosts for load balancing and redundancy with Docker Swarm or some other orchestrator (such as Kubernetes).

Installation

Install docker using your distro's package manager and then start the service.

On a Fedora distro (https://docs.docker.com/engine/installation/linux/docker-ce/fedora/#install-using-the-repository):

# dnf config-manager --add-repo  https://download.docker.com/linux/fedora/docker-ce.repo
# dnf install docker-ce
# systemctl start docker

If installed properly, running the hello-world image should work without issue:

# docker run hello-world

To play with docker, you will need to run things as root or be part of a group that has access to docker's socket.

See more here: https://docs.docker.com/engine/installation/linux/linux-postinstall/
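For example, to let a regular user run docker commands without root (note that membership in the docker group is effectively root access), add the user to the docker group and log in again:

# usermod -aG docker <username>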

Introduction

There are 3 main components to Docker, all of which can be managed through the docker command line utility:

  1. Images: A docker-based container image contains the application and its environment
  2. Registries: A docker image repository. After building an image, it can be uploaded (pushed) and later downloaded (pulled) by others.
  3. Containers: A regular Linux container running on the docker host.

My normal usage with Docker typically does not involve the docker command directly, except to list or enter a container. Utilities such as docker-compose make creating and managing docker containers extremely easy, as all container configuration is defined inside a single YAML file. Container networking is also made trivial with docker-compose, as it handles the creation of docker networks and the linking of containers to those networks.

For more complex applications, consider using an orchestrated system such as Kubernetes.

Images

An image contains all the necessary files to start an application: system files, libraries, application executables, and even configuration files (though configuration files are typically injected through volumes to customize a container's functionality; more on this later). Ideally, an image should contain just enough files to get the application working, both to minimize the image size and to reduce the attack surface.

Docker images are a series of filesystem layers, each of which inherits and modifies files from the preceding layer. When a container is started from an image, all layers are combined using an overlay filesystem to create a temporary sandbox filesystem for the container to execute in. Any changes made to this temporary filesystem are lost when the container is deleted. The exception is if the modifications are saved as another layer using docker commit, but this is not recommended as it defeats the benefits of docker images.

Applications that require persistent storage, such as a database server for instance, should have the data files stored on a persistent volume so that when the container is recreated (such as when an updated image is pulled), the application can still access its data files independent of the image.
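For example, a minimal sketch of running a database with its data directory kept on the host (the path and password are illustrative, borrowed from the compose example later on this page):

# docker run -d --name db \
    -v /var/volumes/lamp/db:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=xxxx \
    mysql:5.5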

To list all images on your system:

# docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              2d696327ab2e        2 weeks ago         122MB
alpine              latest              76da55c8019d        3 weeks ago         3.97MB
hello-world         latest              05a3bd381fc2        3 weeks ago         1.84kB

To create a container, start a program with one of these images using the docker run command. A few common arguments that are passed to docker run are:

  • -ti - terminal interactive (for interactive shells only)
  • -d - start container detached; use docker attach <container id or name> to attach.
  • --name - provides a name rather than the random one that is generated.
  • --rm - removes the container after it stops (i.e. when the container's application (in this example, the shell) exits)

For example, to start a bash shell in the ubuntu image:

# docker run -ti --rm ubuntu:latest bash
root@c148fbe71597:/# cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.3 LTS"

You can see all running docker containers using the docker ps command. Common arguments to docker ps are:

  • -a - To see all containers including ones that have stopped
  • -l - Show the most recently created container
# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                       PORTS               NAMES
3ab813a72c70        ubuntu:latest       "bash"              5 minutes ago       Exited (130) 5 minutes ago                       agitated_dubinsky
2c4251af3005        alpine:latest       "bash"              6 minutes ago       Created                                          eager_murdock
d9f2d1a0025c        hello-world         "/hello"            29 minutes ago      Exited (0) 29 minutes ago                        musing_lewin

Docker containers can be referenced either by their container ID (which is a hash), or the randomly generated name.

A stopped container can be 'saved' back into an image using the docker commit command. This is not recommended if your goal is to make containers that can be recreated from scratch (i.e. strictly from Dockerfiles). There are two ways to go about this:

Manually commit the container ID (the hash), then tag the resulting image ID with an image name:
# docker commit <container id>
(returns an <image id>)
# docker tag <image id> <image tag>
Or, commit the container using its name and tag the image in one step:
# docker commit <container name> <image tag>
(returns an <image id>)

Containers

Dealing with Stopped Containers

A container is stopped when the first process in the container terminates (whether it exits intentionally, segfaults, or receives a stop/kill signal). This is similar to how killing init on a Linux system would stop the system.

To stop a running container, use either the docker stop or docker kill commands.

A stopped container can also be started again using docker start. Interactive commands (like a bash shell) will require the -ai flags: docker start -ai <container name>.

Troubleshooting a stopped container can be done by viewing any output generated by the container with the docker logs command. While all text output generated by a container is logged by docker, it is still a good idea to rely on file logs for performance (docker logs can be slow).
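For example, to view the output of the stopped ubuntu container from the earlier docker ps listing:

# docker logs agitated_dubinsky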

Networking

Docker provides networking to containers through something similar to port forwarding between the host and the container. Host and container port mappings are given by the -p host:container/proto argument when starting a container.

The container port is required. If no host port is provided, docker will pick a random available port on the host system. If no protocol is provided, docker will use TCP.

For example:

docker run --rm -ti -p 80:8080 web-server bash

A service listening on TCP port 8080 inside the container will be accessible through TCP port 80 on the host.

Side note: You can make a docker container listen only on the loopback interface by running something similar to docker run -p 127.0.0.1:1234:1234/tcp. Docker will forward host-only traffic on TCP port 1234 to the container.

When 'exposing' a particular port on a container, docker sets up the standard network model:

  1. Create a veth/eth interface pair
  2. Connect veth to the docker bridge (docker0)
  3. Add eth to the docker container namespace
  4. Assign an IP address on eth (within same network as docker0)
  5. Create any port forwarding necessary to 'expose' ports (e.g. appending rules to the DOCKER iptables chain).
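The forwarding rules docker creates can be inspected on the host (assuming the default iptables setup):

# iptables -t nat -L DOCKER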

Linking

Docker provides a way for containers to 'link' to other containers on the same host. By linking one container to another, applications can talk directly to applications in a separate container. When starting a container linked to another, Docker automatically updates the /etc/hosts file and sets up networking so that applications can reach the other container directly, without traffic needing to route through the host.

Example:

On the 'server':
# docker run -ti --rm --name server ubuntu bash
root@85bdf83d4477:/# cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2      85bdf83d4477
On the 'client':
# docker run --rm -ti --link server --name client ubuntu bash
root@60e2d5ad2dbc:/# cat /etc/hosts
127.0.0.1       localhost
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2      server 85bdf83d4477
172.17.0.3      60e2d5ad2dbc

The client can connect to 'server' directly by hostname.

This may be brittle, especially if the containers are stopped or restarted separately at some point in the future.

Dynamic Linking

Containers can also be linked together by creating a docker network, which acts like a private network between two or more containers.

To create a network and use it:

# docker network create example
# docker run --rm -ti --net=example ubuntu bash

Unlike linking in the previous section, container names are looked up using the embedded docker DNS server. Starting and restarting containers will not break lookups, since the DNS server always has the most up-to-date information (versus the static /etc/hosts file that is generated on startup). In the example above, none of the containers started on the 'example' network will appear in each other's hosts files, but the hostnames will still resolve.
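A quick sketch to demonstrate (the 'server' container and its image are illustrative):

# docker run -d --net=example --name server nginx
# docker run --rm -ti --net=example ubuntu bash

Inside the second container, getent hosts server (or ping server) will resolve the name via the embedded DNS server, even though it does not appear in /etc/hosts.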

Docker networks can be listed using docker network ls, and a network's configuration can be shown as JSON using docker network inspect example.

Image Building with Dockerfiles

A Dockerfile defines the steps required to build a Docker image. Use the docker build command on a directory containing a Dockerfile to build a docker image to the local docker registry.

A dockerfile should contain at the very least:

# The base image to use (eg. the operating system, or service)
FROM baseimage:version

# Things to do on the base image. This can contain any number of RUN or COPY commands
# Each RUN or COPY command will generate a new docker image.
RUN yum install any-dependencies

# Expose any ports that are required
EXPOSE 80

# run the application. Use an ENTRYPOINT script for complex images
CMD ["python", "/some/app.py"]

Each step builds on top of the results from the previous step. A new overlay filesystem will be created for all steps using RUN, ADD, and COPY. All other instructions will only modify the container's metadata.

To build lightweight container images, it's best to keep the number of overlay layers as low as possible. A high number of layers will result in a bloated image and possibly slower performance when an application needs to do many reads/writes on the container filesystem.

Reducing the number of layers can be done by combining as many steps together as possible. For example, multiple RUN commands can be combined into one by joining with &&. Multiple ADD or COPY can be done by moving all files into a separate directory so that the entire contents of the directory can be copied as one command.

The size of the image itself can be reduced by removing unnecessary files such as source code, caches, or temporary files.
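For example, a sketch of a single RUN layer that installs dependencies and cleans the package cache in the same step, so the removed files never persist in any layer (the package names are illustrative):

# Install and clean up in one layer instead of three
RUN yum install -y any-dependencies && \
    yum clean all && \
    rm -rf /var/cache/yum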

When rebuilding an image, docker can make use of cached images to help speed up the process. When debugging or developing a new image, keep frequently changing operations near the end of the Dockerfile.

To build the Dockerfile in the current directory (.):

# docker build -t name-of-result .

After building, the resulting image will be on the local system (run docker images) and can be used immediately by referencing the image with its tag name.

Images can be pushed to a remote registry using docker push.
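For example, to tag and push the image built above to a private registry (the registry address is illustrative, matching the one used in the registry section below):

# docker tag name-of-result registry.steamr.com:5000/name-of-result
# docker push registry.steamr.com:5000/name-of-result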

Docker Swarm

Docker Swarm provides cluster management and orchestration features, bundled with the Docker engine, that allow multiple docker hosts to run containers as a single cluster. Swarm mode also enables features such as docker secrets, used for storing passwords or other sensitive data required by a container.

The market for container-based orchestration has since largely gone to Kubernetes, which has become the de facto standard.

See Also: https://docs.docker.com/engine/swarm/

Setup

The following ports must be available between the docker hosts.

  • TCP port 2377 for cluster management communications
  • TCP and UDP port 7946 for communication among nodes
  • UDP port 4789 for overlay network traffic

One of the nodes will be the manager; it is the one which first creates the swarm. To create a swarm:

# docker swarm init [ --advertise-addr <MANAGER-IP>]

The optional value --advertise-addr should be used if the swarm is to have more than one node.

You can check the state of the swarm through the docker engine info and by listing the nodes:

# docker info
...
Swarm: active
 NodeID: krjls0ie07ypkftaue2eo257c
 Is Manager: true
...

# docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
krjls0ie07ypkftaue2eo257c *   docker01            Ready               Active              Leader

The * next to the node ID indicates that you're currently connected to this node.

Joining a Swarm

Additional nodes can join using the docker swarm join command. The full command, including the token, can be obtained by running the following on the manager:

# docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-5siymu2ijhm51gl34s13q5njpdpaxavsoll5zd88jqltk4rgt2-1kygf0i36tkrb3lfnmep2v72q 172.20.1.30:2377

Swarm Networking

A swarm node generates two types of traffic:

  • Docker Node Management & Control (encrypted, ports TCP&UDP/7946, UDP/4789)
  • Application traffic (from containers, external clients)

Network concepts in swarm services:

  • Overlay Networks - A virtual network spanning all docker hosts in the swarm, allowing the containers of a service on different nodes to communicate with each other.
  • Ingress Network - Overlay network that facilitates load balancing among a service's nodes using a module called IPVS.
  • docker_gwbridge - Connects the overlay networks (including the ingress network) to an individual docker host's physical network.

Deploy a Service

A service is to a swarm what a container is to a single docker host: a docker service runs within a docker swarm.

Services can be created via the command line (similar to docker run), or by using a docker-compose.yml file.

To start an image as a service:

# docker service create --replicas 1 --name helloworld alpine ping 127.0.0.1

A container in a service will be restarted if it terminates abnormally.
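The number of replicas can also be changed after creation:

# docker service scale helloworld=3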

To list all services:

# docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE                  PORTS
aath3rehcloy        test-service        replicated          1/1                 alpine:latest

To inspect a service:

# docker service inspect --pretty test-service

ID:             aath3rehcloyz8x66xwsyexkw
Name:           test-service
Service Mode:   Replicated
 Replicas:      1
Placement:
UpdateConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Update order:      stop-first
RollbackConfig:
 Parallelism:   1
 On failure:    pause
 Monitoring Period: 5s
 Max failure ratio: 0
 Rollback order:    stop-first
ContainerSpec:
 Image:         alpine:latest@sha256:f006ecbb824d87947d0b51ab8488634bf69fe4094959d935c0c103f4820a417d
 Args:          ping mirror.cpsc.ucalgary.ca
Resources:
Endpoint Mode:  vip


Stacks

Applications will typically have more than one service working together. Docker helps manage such applications, composed of discrete services, with 'stacks'.

A stack defines a set of services that needs to be running within a swarm to maintain an application's state.

Use Docker Compose to define your application. See https://github.com/docker/labs/blob/master/beginner/chapters/votingapp.md

To deploy an application:

# docker stack deploy --compose-file docker-stack.yml vote

To verify the stack:

# docker stack services vote

To remove a stack:

# docker stack rm vote


Docker Compose

Docker Compose is a program that simplifies the building of Docker applications by allowing you to define all the service components of the application in a single YAML file. Unlike Docker Swarm, it does not provide orchestration or any infrastructure-level support around your application; it simply makes it easy to define services that are intended to run on a single docker host.

For instance, if a LAMP stack is required, a docker-compose configuration file would define each of the components making up the stack (Apache+PHP, MySQL, and optionally a load balancer). Rather than creating and removing each of the Docker containers and networks manually, the entire service can be brought up or down with one command.

An example Docker Compose file (docker-compose.yml):

version: '3.3'

services:
  traefik:
    image: traefik:latest
    restart: always
    command: --web --docker --docker.watch --docker.exposedbydefault=false
    volumes:
      - /var/volumes/lamp/traefik/traefik.toml:/traefik.toml
      - /var/volumes/lamp/traefik/acme.json:/acme.json
      - /var/run/docker.sock:/var/run/docker.sock:ro
    networks:
      external_net:
        ipv4_address: 1.1.1.1
      backend:

  web:
    build: web/.
    restart: always
    volumes:
      - /var/volumes/lamp/data:/export/data
      - /var/volumes/lamp/config:/export/config
    networks:
      backend:
    ports:
      - "8080:8080"
    depends_on:
      - db
    labels:
      - "traefik.enable=true"
      - "traefik.port=8080"
      - "traefik.frontend.rule=Host:hello.steamr.com"

  db:
    image: mariadb:10
    restart: always
    volumes:
      - /var/volumes/lamp/db:/var/lib/mysql
    environment:
      - MYSQL_ROOT_PASSWORD=xxxx
      - MYSQL_DATABASE=web
      - MYSQL_USER=web
      - MYSQL_PASSWORD=xxxx
    networks:
      backend:
        aliases:
          - mysql

networks:
  external_net:
    external: true

  backend:
    ipam:
      config:
        - subnet: 192.168.246.0/24

To bring this LAMP service up, run:

# ls
docker-compose.yml

##  the '-d' is for detached mode
# docker-compose up -d

docker-compose will then read the configuration file, set up the Docker networks, and bring up the Docker containers with the required configuration (image, volumes, network settings, environment variables, etc.) as defined in the file. Networking between containers within the same service is handled automatically using dynamic linking (rather than static links via hosts files), since individual containers can be recreated without bringing the entire stack down.

The stack can be brought down using

# docker-compose down

Or reloaded if the docker-compose.yml file has been updated by running `docker-compose up` again:

# docker-compose up -d

Networking

Some notes on networking in docker-compose:

Ports

Expose ports. Either specify both ports (HOST:CONTAINER), or just the container port (a random host port will be chosen).

  • Ports listed in docker-compose.yml are shared among the different services started by docker-compose.
  • Ports are published to the host machine, either on the given host port or a random one.

Example:

mysql:
  image: mysql:5.7
  ports:
    - "3306"

Port 3306 will be published on a random port on the host machine (and thereby potentially to the internet).
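To pin the host port, or to bind only to the loopback interface (analogous to the docker run -p side note earlier), give the full mapping; a sketch:

mysql:
  image: mysql:5.7
  ports:
    - "127.0.0.1:3306:3306"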

Expose

Expose ports without publishing them to the host machine; they'll only be accessible to linked services. Only the internal (container) port can be specified.


mysql:
  image: mysql:5.7
  expose:
    - "3306"

Port 3306 will not be exposed to the host machine; it is only reachable from linked containers.


Networks

Provide a list of networks that the container should be connected to. In the example above, 'traefik' (the load balancer/proxy) is connected to the backend network (used to communicate with the web server), as well as to the internet with a defined IP address.

If a private network is used (e.g. the backend network in the example above), it is necessary to define the subnet of that network by specifying the ipam subnet. Ensure that this subnet does not conflict with any other subnets in use on the same docker host.


Other Notes

Tips & Tricks

  • To detach from a container, use the key combination Ctrl+P Ctrl+Q.
  • To run another program in an existing container, use the docker exec command with the same arguments as docker run.
  • Don't fetch dependencies when containers start. Instead, make it part of the image.
  • If a tag is not provided, it defaults to 'latest'.

Resource Constraints

# docker run --memory <max-memory>
# docker run --cpu-shares <shares>   (relative weight compared to other containers)
# docker run --cpu-quota <quota>     (hard limit)
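For example, to cap a container at 512 MB of memory with a reduced CPU weight (the values are illustrative):

# docker run --rm -ti --memory 512m --cpu-shares 512 ubuntu bash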

Custom Unsecured Registry

You may run your own local registry (as a Docker container, no less), but unless you secure it with a signed certificate, you need to edit /etc/docker/daemon.json to list your registry's address (and restart the docker service afterwards):

{
        "insecure-registries": ["registry.steamr.com:5000","registry:5000"]
}

Changing Docker IP Subnets

Docker uses 172.17.0.0/16 for its internal networking by default. To run docker on networks that utilize this range, you will need to change docker's bridge IP address.

# ip a
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN
    link/ether 02:42:68:e8:2f:99 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 scope global docker0
       valid_lft forever preferred_lft forever

To change docker's bridge IP address, stop the docker service (systemctl stop docker), then edit /etc/docker/daemon.json (create it if it doesn't exist) with the following:

{
  "bip": "192.168.252.1/22"
}

Restart the docker service using systemctl start docker.

Ensure that docker has updated the docker bridge IP and also the IPTable masquerade rules.

# iptables -t nat -L

You can also manually clear the docker rules:

# systemctl stop docker
# iptables -t nat -F POSTROUTING
# ip link set dev docker0 down
# ip addr del 172.17.0.1/16 dev docker0
# ip addr add 192.168.252.1/22 dev docker0

See Also: https://docs.docker.com/engine/userguide/networking/default_network/custom-docker0/

DNS within Containers

All containers use the embedded docker DNS server at 127.0.0.11, which is provided by the docker engine. The embedded DNS server allows for service discovery of containers that are aliased by link or share the same network (in the case of docker-compose).

When docker starts, it will look for a valid nameserver in the host's /etc/resolv.conf file. If none is found, it will automatically use the public 8.8.4.4 and 8.8.8.8 DNS servers, which could cause issues in your environment if, for instance, outgoing DNS is blocked or there is a distinction between internal and external DNS zones.

This can be configured with the /etc/docker/daemon.json configuration file by defining an entry for dns and then restarting docker.

{
        "dns": ["10.1.0.10"]
}

If you decide to set a container's DNS server with --dns=x.x.x.x, it will overwrite /etc/resolv.conf with that DNS server and will break service discovery.

See Also: https://docs.docker.com/v17.09/engine/userguide/networking/configure-dns/

docker_gwbridge

See: https://docs.docker.com/engine/swarm/networking/#customize-the-docker_gwbridge

The docker_gwbridge is a virtual bridge that connects the overlay networks (including the ingress network) to an individual Docker daemon's physical network. Docker creates it automatically when you initialize a swarm or join a Docker host to a swarm, but it is not a Docker device; it exists in the kernel of the Docker host. If you need to customize its settings, you must do so before joining the Docker host to the swarm, or after temporarily removing the host from the swarm.

# docker network inspect docker_gwbridge

Stop or disconnect all attached containers:

# docker network disconnect --force docker_gwbridge gateway_ingress-sbox

Then, remove and recreate the bridge:

# docker network rm docker_gwbridge
# docker network create \
--subnet 192.168.250.0/22 \
--opt com.docker.network.bridge.name=docker_gwbridge \
--opt com.docker.network.bridge.enable_icc=false \
--opt com.docker.network.bridge.enable_ip_masquerade=true \
docker_gwbridge


Mounting NFS in Docker

The docker-volume-netshare utility provides a convenient method of mounting persistent volumes to your docker container.

See https://github.com/ContainX/docker-volume-netshare

The build fails on CentOS with this error:

# go build github.com/ContainX/docker-volume-netshare
src/github.com/ContainX/docker-volume-netshare/netshare/drivers/ceph.go:44: undefined: volume.Response
src/github.com/ContainX/docker-volume-netshare/netshare/drivers/ceph.go:68: undefined: volume.Response
src/github.com/ContainX/docker-volume-netshare/netshare/drivers/cifs.go:76: undefined: volume.Response
src/github.com/ContainX/docker-volume-netshare/netshare/drivers/cifs.go:128: undefined: volume.Response
src/github.com/ContainX/docker-volume-netshare/netshare/drivers/driver.go:23: undefined: volume.Response
src/github.com/ContainX/docker-volume-netshare/netshare/drivers/driver.go:23: undefined: volume.Request
src/github.com/ContainX/docker-volume-netshare/netshare/drivers/driver.go:47: undefined: volume.Response
src/github.com/ContainX/docker-volume-netshare/netshare/drivers/driver.go:47: undefined: volume.Request
src/github.com/ContainX/docker-volume-netshare/netshare/drivers/driver.go:61: undefined: volume.Response
src/github.com/ContainX/docker-volume-netshare/netshare/drivers/driver.go:61: undefined: volume.Request
src/github.com/ContainX/docker-volume-netshare/netshare/drivers/driver.go:61: too many errors

Instead, grab the binaries from https://github.com/ContainX/docker-volume-netshare/releases and use this script to install it on the system:

#!/bin/bash

cp docker-volume-netshare /usr/bin/docker-volume-netshare
echo "DKV_NETSHARE_OPTS=nfs" > /etc/sysconfig/docker-volume-netshare
cat <<EOF > /usr/lib/systemd/system/docker-volume-netshare.service
[Unit]
Description=Docker NFS, AWS EFS & Samba/CIFS Volume Plugin
Documentation=https://github.com/gondor/docker-volume-netshare
After=nfs-utils.service
Before=docker.service
Requires=nfs-utils.service


[Service]
EnvironmentFile=/etc/sysconfig/docker-volume-netshare
ExecStart=/usr/bin/docker-volume-netshare \$DKV_NETSHARE_OPTS
StandardOutput=syslog

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable docker-volume-netshare
systemctl start docker-volume-netshare

TFTP inside Docker

If you're going to run a TFTP server inside docker, you need the ip_conntrack_tftp and ip_nat_tftp kernel modules loaded (named nf_conntrack_tftp and nf_nat_tftp on newer kernels). Otherwise, the firewall will just block everything:

20:31:02.362060 IP 10.1.3.3.2070 > 172.20.0.2.69:  28 RRQ "/pxelinux.0" octet tsize 0
20:31:02.363084 IP 172.20.0.2.49158 > 10.1.3.3.2070: UDP, length 14
20:31:02.363683 IP 10.1.3.3.1024 > 172.20.0.2.49158: UDP, length 17
20:31:02.363742 IP 172.20.0.2 > 10.1.3.3: ICMP 172.20.0.2 udp port 49158 unreachable, length 53
20:31:02.363856 IP 10.1.3.3.2071 > 172.20.0.2.69:  33 RRQ "/pxelinux.0" octet blksize 1456
20:31:02.364482 IP 172.20.0.2.49159 > 10.1.3.3.2071: UDP, length 15
20:31:02.365135 IP 10.1.3.3.1024 > 172.20.0.2.49159: UDP, length 4
20:31:02.365211 IP 172.20.0.2 > 10.1.3.3: ICMP 172.20.0.2 udp port 49159 unreachable, length 40
20:31:03.364295 IP 172.20.0.2.49158 > 10.1.3.3.2070: UDP, length 14
20:31:03.365624 IP 172.20.0.2.49159 > 10.1.3.3.2071: UDP, length 15
20:31:03.366342 IP 10.1.3.3.1024 > 172.20.0.2.49159: UDP, length 4
20:31:03.366405 IP 172.20.0.2 > 10.1.3.3: ICMP 172.20.0.2 udp port 49159 unreachable, length 40
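Loading the connection tracking helpers on the docker host should resolve this; a sketch, using the newer module names:

# modprobe nf_conntrack_tftp
# modprobe nf_nat_tftp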


Usage Reference

docker images - Lists docker images that you created or pulled using docker pull.
docker pull - Pulls a docker image from a registry (either docker.io or a private registry).
  • Version numbers can be given after a colon.
docker run - Runs a command in a container. E.g. docker run alpine ls -l
  • For commands that require an interactive shell, pass in -ti.
  • Pass --name for a custom name instead of a long hash.
  • Pass environment variables using -e.
  • Pass -d to detach immediately.
  • Pass -P to publish all exposed ports to random host ports.
docker exec - Starts another process in the specified container. E.g. docker exec -ti container-name bash
docker ps - Lists containers that are running.
  • Pass -a to see all containers, including those that have exited.
docker port - Shows the port mappings between the container and the host.
docker stop - Stops a container by sending SIGTERM, waiting 10 seconds, then sending SIGKILL. The wait time can be changed with --time.
docker kill - Stops a container by sending SIGKILL immediately. The signal can be set with --signal.
docker rm - Removes a container. Use -f to stop and remove. Any volumes associated with the container remain unless the -v flag is given.

Cleaning Up

Pulled from: https://zaiste.net/posts/removing_docker_containers/

docker ps -aq -f status=exited - Lists all exited containers.
docker ps -aq --no-trunc | xargs docker rm - Removes stopped containers. Running containers are not removed; an error message is printed for each of them.
docker images -q --filter dangling=true | xargs docker rmi - Removes dangling/untagged images.
docker ps --since a1bz3768ez7g -q | xargs docker rm - Removes containers created after a specific container.
docker ps --before a1bz3768ez7g -q | xargs docker rm - Removes containers created before a specific container.
docker build --rm - Removes intermediary images during the build process.
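On newer Docker releases (1.13+), much of this cleanup can be done in one step, removing stopped containers, dangling images, and unused networks:

# docker system prune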

Questions & Answers

Some questions I had before I knew anything about Docker, with the answers I found afterwards.

How does one run a newer distro on an old host kernel that may have a different kernel ABI or different sets of available syscalls?
For the most part, the kernel system calls are the same. There should be no problems running older distros, since the kernel retains backwards compatibility. Newer distros may be an issue, however.
What are the security implications of root inside the container? Can they break out of the container context?
This is one of the downsides to Docker: the engine requires root, and it could be a security issue if the container can somehow gain access to the docker engine. For the most part, this is safe as long as you do not mount the docker socket into the container or grant the container extra privileges on the host.
https://github.com/rootwyrm/lxc-media/tree/master/systemd_transmission
https://www.reddit.com/r/freebsd/comments/5vfj3w/docker_vs_jails/
How are updates to the base image done? Does it automatically update containers relying on it? Or will containers need to be rebuilt again?
Images are immutable. An updated image will have a different hash, even if it keeps the same tag (e.g. latest).
If an image has been updated, pull it with docker pull to obtain the most recent copy from the remote registry.
When the container is recreated from the updated image, any data not on persistent volumes will be lost.
How is data kept persistent? Does it need to be copied out/in every time it is to be rebuilt?
Persistent data should be saved outside of the container using persistent volumes, e.g. -v /outside/path:/inside/path.
Persistent volumes can have multiple backings, whether it be saved on the host system or remotely with NFS.
NFS access from the container?
You must use something like docker-volume-netshare in order to have NFS volumes in the docker container. It does not seem like you can map an existing NFS volume to a docker volume.
Domain trust - is it the same as a regular VM?
If your application needs the operating system to be on the domain, you're probably doing it wrong. Ideally, your application should be using just LDAP without the need for a trust.
If you absolutely need domain trust in the container, keep in mind that you might need to rejoin when the container is rebuilt (say if a new image is available) or when it eventually loses its trust.

See Also

Nginx reverse proxy for HTTPS:

