Docker (software)
Docker is a service for creating and managing Linux containers.
Containers package the application layer of an OS together with whatever software you're trying to run.
The container itself contains the code to be run along with its environment.
Anything which needs state is mounted into the container as a volume.
==Installation==
===Ubuntu===
[https://docs.docker.com/install/linux/docker-ce/ubuntu/#install-docker-engine---community-1 Reference]
{{hidden | Install Script |
<syntaxhighlight lang="bash">
# Uninstall old docker
sudo apt-get remove docker docker-engine docker.io containerd runc

# Update repos
sudo apt update

# Install prerequisites
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common

# Add official gpg key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Add docker repo
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"

# Install
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
</syntaxhighlight>
}}
===Windows===
# Upgrade Windows to 2004 or newer
# Install and enable WSL2
# Install Docker Desktop

==Guides==
[https://docs.docker.com/get-started/ Get Started]
==Dockerfile==
How to write a Dockerfile.
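A minimal sketch (the file contents and image name are illustrative, not from this page) of writing a Dockerfile and building an image from it:
<syntaxhighlight lang="bash">
# Write a trivial Dockerfile: base image, install a dependency, copy the code, set the default command.
cat > Dockerfile <<'EOF'
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y --no-install-recommends curl
COPY . /app
WORKDIR /app
CMD ["bash"]
EOF

# Build the image with a tag, then run it once interactively.
docker build -t my-example-image:latest .
docker run --rm -it my-example-image:latest
</syntaxhighlight>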
==CLI Usage==
===Images===
For the most part, you don't need to worry about images as <code>docker run</code> and <code>docker-compose</code> will download and build images for you as needed.
<syntaxhighlight lang="bash">
# List images.
docker image ls
# Prune unused images.
docker image prune -a
# Copy image.
docker tag $SOURCE $TARGET
docker push $TARGET
</syntaxhighlight>
;Notes
* Pruning with <code>docker system prune</code> will also delete images.
* Omitting <code>-a</code> will only prune dangling (untagged) images.
===Containers===
<syntaxhighlight lang="bash">
docker container ls
</syntaxhighlight>

====Run====
<syntaxhighlight lang="bash">
docker run <container>
</syntaxhighlight>
* <code>-p hostport:containerport</code> to do port forwarding
** To restrict listening to localhost use <code>-p 127.0.0.1:80:80</code>
* <code>-it</code> to be interactive with a pseudo-tty (see the example below)
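A short example of these flags (the images and ports are illustrative):
<syntaxhighlight lang="bash">
# Publish container port 80 on the host's localhost only.
docker run -d -p 127.0.0.1:8080:80 nginx

# Start a throwaway interactive shell.
docker run -it --rm ubuntu:22.04 bash
</syntaxhighlight>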
===stats===
[https://docs.docker.com/engine/reference/commandline/stats/ docker stats]
Returns information about container CPU usage, memory usage, network usage, and disk usage.
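For a one-off snapshot instead of a live stream:
<syntaxhighlight lang="bash">
# Print a single snapshot of resource usage for all running containers.
docker stats --no-stream
</syntaxhighlight>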
==Networking==
The default networking mode is bridge. You should leave this as-is for most of your containers.
===bridge===
In bridge mode, the docker service acts as a NAT, giving each container its own local IP address on a subnet shared with the docker host.
On Linux, you can type <code>ip a</code> to see the IP address of the <code>docker0</code> network interface.
On my server, it is <code>172.17.0.1/16</code>.
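You can check this from the host:
<syntaxhighlight lang="bash">
# Show the host's address on the default bridge interface.
ip -4 addr show docker0

# Show the bridge network's subnet and the containers attached to it.
docker network inspect bridge
</syntaxhighlight>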
To access services running on the host (such as MySQL or Postgres), you will need to make these services listen on this network interface and allow it through your firewall. I suggest using [https://github.com/qoomon/docker-host qoomon/docker-host] which can redirect network traffic to the host.
When using docker-compose, services can access each other using their service name as the hostname, as long as they are on the same network and the target service is listening on the port; the port does not need to be published to the host.
===host===
In this mode, containers share the host's network stack and have full access to your network. This can cause port conflicts if you are not careful.
Furthermore, containers will have full access to services listening on your localhost. I do not recommend using this mode for most things.
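A quick illustration (the image is an example):
<syntaxhighlight lang="bash">
# Run a web server directly on the host's network stack; it binds to host port 80 with no -p mapping.
docker run --rm --network host nginx
</syntaxhighlight>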
==GPUs==
See [https://docs.docker.com/config/containers/resource_constraints/#gpu docker guide]
===Setup===
# Go to [https://nvidia.github.io/nvidia-container-runtime/ nvidia-container-runtime] and add the repo
# Install <code>nvidia-container-runtime</code>
===Run===
Add <code>--gpus all</code> to your docker run command.
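For example (the CUDA image tag is an assumption; pick one that matches your driver):
<syntaxhighlight lang="bash">
# Run nvidia-smi inside a CUDA container to confirm the GPU is visible.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
</syntaxhighlight>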
===compose===
See [https://github.com/docker/compose/issues/6691 issue].
<pre>
deploy:
  resources:
    reservations:
      devices:
        - driver: nvidia
          count: 1
          capabilities: [gpu]
</pre>
==Windows==
Notes on using docker with windows.
===Git bash paths===
[https://github.com/docker/toolbox/issues/673 Reference]<br>
When mounting paths using git bash, you need to prepend a <code>/</code> to <code>$(pwd)</code><br>
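For example (the image and mount target are illustrative):
<syntaxhighlight lang="bash">
# The leading slash before $(pwd) prevents Git Bash (MSYS) from rewriting the path.
docker run --rm -v "/$(pwd)":/data alpine ls /data
</syntaxhighlight>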
==docker-compose==
Docker compose allows you to define multiple docker services in a single <code>docker-compose.yml</code> file and run them all together.
You can also use it to record your docker options instead of listing them in a shell script.
<pre>
# Create a folder for your service and cd into it
# Make the docker-compose file.
# Run (i.e. build, create, and start)
docker-compose up -d
# Stop
docker-compose down
# Upgrade
docker-compose pull # Optional, reduces downtime
docker-compose up --force-recreate --build -d
docker image prune -f
</pre>
* Note that <code>docker-compose restart</code> will just restart existing containers. It will not recreate them.
[https://docs.docker.com/compose/compose-file/compose-file-v3/ Compose file reference]
===Compose file===
See the [https://docs.docker.com/compose/compose-file/compose-file-v3/ Compose file specification].
Previously, the Compose file (<code>docker-compose.yml</code>) required a version key. Version 2 and version 3 had different options, and not all options from version 2 were available in version 3. However, as of docker-compose v1.27+, you should no longer specify a version; options from both versions are supported.
{{hidden | Example docker-compose.yml |
<syntaxhighlight lang="yaml">
services:
  web:
    image: registry.gitlab.davidl.me/dli7319/davidl_me:latest
    restart: unless-stopped
</syntaxhighlight>
}}
==Accessing the Host==
Sometimes you may have services running on the host which you want to access from a container.<br>
See [https://github.com/qoomon/docker-host docker-host] for a container which can forward traffic to the host.<br>
Add the following service to your docker compose file; other containers on the same network can then reach host services (e.g. one listening on port 8201) via the hostname <code>docker-host</code>, as in the usage sketch below:
<syntaxhighlight lang="yaml">
docker-host:
  image: qoomon/docker-host
  cap_add:
    - NET_ADMIN
    - NET_RAW
</syntaxhighlight>
* You do not need to add <code>expose</code>.
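A usage sketch (the port is an example; docker-host forwards connections on to the host):
<syntaxhighlight lang="bash">
# From another container on the same network, reach a service listening on the host's port 8201.
curl http://docker-host:8201/
</syntaxhighlight>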
By default, networks are allocated with IP ranges:
* 172.17.0.0/12 with size /16
* 192.168.0.0/16 with size /20
If you want this to be more consistent, you can change it by setting the following in <code>/etc/docker/daemon.json</code>:
<syntaxhighlight lang="json">
{
  "default-address-pools": [
    {"base": "172.16.0.0/12", "size": 24}
  ]
}
</syntaxhighlight>
Then restart docker with <code>sudo systemctl restart docker</code> and prune networks with <code>docker network prune</code>.<br>
Next, in your firewall, allow connections to your localhost from 172.16.0.0/12:
<pre>
ufw allow from 172.16.0.0/12 to any comment "from_docker"
</pre>
==Registries==
The official Docker registry is [https://hub.docker.com/ Docker Hub].<br>
However, [https://www.docker.com/increase-rate-limits/ Docker Hub has rate limits] of 100 pulls per 6 hours.<br>
Alternative public registries:
* [https://gallery.ecr.aws/ AWS ECR Gallery] has a mirror for all official docker containers.
==Caching==
If you want your builds to be fast on CI/CD, you have to set up [https://docs.docker.com/build/cache/ caching].
In particular you should:
* Enable [https://docs.docker.com/build/buildkit/ BuildKit] by setting the environment variable <code>DOCKER_BUILDKIT=1</code>
* Use <code>--cache-from</code> in your build.
* Set up external caching, e.g. with <code>--build-arg BUILDKIT_INLINE_CACHE=1</code> (see the sketch after this list).
** With <code>docker buildx build</code>, you can use <code>--cache-to type=inline</code> instead.
* For multistage builds, cache each stage in your container registry.
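A sketch of a cached CI build (the registry and image names are placeholders):
<syntaxhighlight lang="bash">
export DOCKER_BUILDKIT=1

# Pull the previously pushed image so it can be used as a cache source (ignore failure on the first build).
docker pull registry.example.com/myapp:latest || true

# Build with inline cache metadata so the pushed image can seed future builds.
docker build \
  --cache-from registry.example.com/myapp:latest \
  --build-arg BUILDKIT_INLINE_CACHE=1 \
  -t registry.example.com/myapp:latest .

docker push registry.example.com/myapp:latest
</syntaxhighlight>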
;Resources
* https://michalwojcik.com.pl/2021/01/21/using-cache-in-multi-stage-builds-in-gitlab-ci-docker/
* https://testdriven.io/blog/faster-ci-builds-with-docker-cache/#multi-stage-builds
==Useful Services==
* [https://containrrr.dev/watchtower/ Watchtower] is a tool which will automatically update your docker containers when new images are published. It also has an HTTP endpoint to trigger checks manually, e.g. from CI/CD (a run sketch is below).
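A minimal run sketch (see the Watchtower docs for tokens and scheduling options):
<syntaxhighlight lang="bash">
# Watchtower needs the Docker socket so it can pull new images and recreate containers.
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower
</syntaxhighlight>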
==My Images==
I have a few custom container images below:
* [https://github.com/dli7319/docker-anki-server ghcr.io/dli7319/docker-anki-server:main]
* [https://github.com/dli7319/docker-nextcloud ghcr.io/dli7319/docker-nextcloud:main]
* [https://github.com/dli7319/docker-mediawiki ghcr.io/dli7319/docker-mediawiki:main]
==Resources==
* [https://www.youtube.com/watch?v=fqMOX6JJhGo freeCodeCamp.org Docker Tutorial for Beginners Video]