Update multi-service_container.md #22875

101 changes: 91 additions & 10 deletions content/manuals/engine/containers/multi-service_container.md
@@ -11,7 +11,8 @@
- /config/containers/multi-service_container/
---

A container's main running process is the [`ENTRYPOINT`](https://docs.docker.com/reference/dockerfile/#entrypoint)
and/or [`CMD`](https://docs.docker.com/reference/dockerfile/#cmd) at the
end of the `Dockerfile`. It's best practice to separate areas of concern by
using one service per container. That service may fork into multiple
processes (for example, Apache web server starts multiple worker processes).
@@ -22,16 +23,17 @@

The container's main process is responsible for managing all processes that it
starts. In some cases, the main process isn't well-designed, and doesn't handle
"reaping" (stopping) child processes gracefully when the container exits, or
doesn't forward signals to them. If your process falls into this category, you
can use the [`--init` option](https://docs.docker.com/reference/cli/docker/container/run/#init) when you
run the container. The `--init` flag inserts a tiny init-process into the
container as the main process, and handles reaping of all processes when the
container exits. Handling such processes this way is superior to using a
full-fledged init process such as `sysvinit` or `systemd` to handle process
lifecycle within your container.
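
For example, to run an image with the built-in init process in place (the image name is a placeholder):

```console
$ docker run --init -d --name my-app my-image
```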

If you need to run more than one service within a container, you can achieve
this in a few different ways. Bear in mind that each approach comes with trade-offs.

## Use a wrapper script

@@ -66,6 +68,11 @@
```dockerfile
CMD ./my_wrapper_script.sh
```
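
The wrapper script itself can be as simple as the following sketch (the process
names are placeholders); it starts both services and stops the container as soon
as either one exits:

```bash
#!/bin/bash

# Start the first process in the background.
./my_first_process &

# Start the second process in the background.
./my_second_process &

# Wait for either process to exit, then propagate its exit code so the
# container stops when one of the services dies.
wait -n
exit $?
```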

If you combine this approach with the `--init` flag mentioned previously, you get
the benefit of zombie-process reaping, but still no signal forwarding: subprocesses
may not terminate gracefully. If your application requires a graceful shutdown, be
aware of this pitfall.
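
If graceful shutdown matters, one option is to forward signals from the wrapper
script yourself. A rough sketch for a single service (the process name is a
placeholder):

```bash
#!/bin/bash

# Start the service in the background and remember its PID.
./my_first_process &
pid=$!

# Forward SIGTERM and SIGINT to the child so it can shut down cleanly.
trap 'kill -TERM "$pid"' TERM INT

# The first wait returns as soon as a trapped signal arrives; wait again
# afterwards to collect the child's real exit status.
wait "$pid"
trap - TERM INT
wait "$pid"
exit $?
```

With several services, a process manager such as the ones described in the
following sections handles signal forwarding for you.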

## Use Bash job controls

If you have one main process that needs to start first and stay running but you
@@ -104,14 +111,18 @@
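
A minimal sketch of the job-control pattern, assuming a long-running main
process and a short-lived helper (both names are placeholders):

```bash
#!/bin/bash

# Start the main process in the background so the shell is free to run
# the helper.
./my_main_process &

# Run the helper; it interacts with the main process and then exits.
./my_helper_process

# Bring the main process back to the foreground so the container keeps
# running until it exits.
fg %1
```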

## Use a process manager

Using a process manager is more involved than the other options, as it requires
you to bundle the process manager binary and its configuration into your image
(or base your image on one that includes it), along with
the different applications it manages. Then you start the process manager as PID 1, which
manages your processes for you.

When you use a process manager this way, you don't need the `--init` flag.

### supervisord

The following Dockerfile example shows this approach for [`supervisord`](https://supervisord.org/).
The example assumes that these files exist at the root of the build context:

- `supervisord.conf`
- `my_first_process`
@@ -142,3 +153,73 @@
```ini
stdout_logfile_maxbytes=0
redirect_stderr=true
```
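
For reference, a complete minimal `supervisord.conf` along these lines might
look like the following sketch (program names and paths are placeholders):

```ini
[supervisord]
nodaemon=true
logfile=/dev/null
logfile_maxbytes=0

[program:my_first_process]
command=/usr/local/bin/my_first_process
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
redirect_stderr=true

[program:my_second_process]
command=/usr/local/bin/my_second_process
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0
redirect_stderr=true
```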

The obvious downside to this approach is that your container will run indefinitely,
as `supervisord` is not designed to terminate when a supervised process terminates.
If you're aiming for a small image, this can also be an issue, as `supervisord`
requires a full Python runtime to be available in the image.

### s6 and s6-overlay

[s6-overlay](https://github.com/just-containers/s6-overlay) is a layer that can be installed
on top of your container and uses the [s6](https://skarnet.org/software/s6/overview.html)
process manager.

To make use of it in your container, you first have to pull it into your Dockerfile:

```dockerfile
# Use your favorite image
FROM ubuntu
ARG S6_OVERLAY_VERSION=3.2.1.0

RUN apt-get update && apt-get install -y nginx xz-utils
RUN echo "daemon off;" >> /etc/nginx/nginx.conf

ADD https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-noarch.tar.xz /tmp
RUN tar -C / -Jxpf /tmp/s6-overlay-noarch.tar.xz
ADD https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-x86_64.tar.xz /tmp
RUN tar -C / -Jxpf /tmp/s6-overlay-x86_64.tar.xz

ENTRYPOINT ["/init"]
```

Depending on your target CPU architecture, you may need to pull in a different release.
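
For example, on an ARM64 host you would pull the matching release artifact
instead. A sketch, assuming the `aarch64` build published alongside the other
architectures:

```dockerfile
ADD https://github.com/just-containers/s6-overlay/releases/download/v${S6_OVERLAY_VERSION}/s6-overlay-aarch64.tar.xz /tmp
RUN tar -C / -Jxpf /tmp/s6-overlay-aarch64.tar.xz
```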

For every service that should run in parallel, you then create a shell script
that contains instructions to run or stop the process.

The following example starts an Nginx server and a dummy process.

In `services/nginx/`, create a file `run`:

```sh
#!/usr/bin/with-contenv sh
echo >&2 "Starting: 'nginx'"
exec /usr/sbin/nginx
```

Then create a file `finish` in the same directory:

```sh
#!/usr/bin/env sh
echo >&2 "Exit(code=${1}): 'nginx'"
```

In `services/hello-world/`, create a file `run`:

```sh
#!/usr/bin/with-contenv sh
echo >&2 "Starting: 'hello-world'"
exec sleep 3600
```

Then create a file `finish` in the same directory:

```sh
#!/usr/bin/env sh
echo >&2 "Exit(code=${1}): 'hello-world'"
```
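
These service directories also need to end up inside the image where s6-overlay
looks for them. A sketch, assuming the legacy `/etc/services.d/` layout and a
`services/` directory at the root of the build context:

```dockerfile
# Copy the service definitions into the directory scanned by s6-overlay
# and make sure the scripts are executable.
COPY services/ /etc/services.d/
RUN chmod +x /etc/services.d/*/run /etc/services.d/*/finish
```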

This example behaves the same way as the `supervisord` example and doesn't
terminate when all supervised processes have stopped. Using `exec s6-svscanctl -t /var/run/s6/services`
at the end of a `finish` script can make the container stop when that service exits.
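
To try it out, you might build and run the image along these lines (tag and
port mapping are placeholders):

```console
$ docker build -t s6-demo .
$ docker run --rm -p 8080:80 s6-demo
```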