More Docker Guidelines #33

Merged · 7 commits · Sep 25, 2024
23 changes: 11 additions & 12 deletions docs/source/workflows/docker/beginner_guide_to_docker.md
@@ -42,7 +42,7 @@ generally find yourself running the same commands for creating containers and wa

**Docker Registry:** A registry or archive store is a place to store and retrieve docker images. This is one way to
share already-built docker images. LASP has a private repository, in the form of the
-[LASP docker registry](./lasp_docker_registry.md).
+[LASP docker registry](lasp_docker_registry).

So, you define a Docker *image* using a *Dockerfile* and/or a *Docker Compose* file. Running this image produces a
Docker *container*, which runs your code and environment. An image can be pushed up to a *registry*, where anyone with
@@ -59,9 +59,9 @@ the current directory for a file named `Dockerfile` by default, but you can spec
arguments or through your docker compose file.

Generally, each Docker image should be as small as possible. Each Dockerfile should only do one thing at a time. If you
-have a need for two extremely similar docker containers, you can also use [Multi-stage builds](./multi_stage_builds.md).
-You can orchestrate multiple docker containers that depend on each other using
-[Docker compose](./docker_compose_examples.md).
+have a need for two extremely similar docker containers, you can also use [Multi-stage builds](multi_stage_builds). You
+can orchestrate multiple docker containers that depend on each other using
+[Docker compose](docker_compose_examples).

To start, your Dockerfile should specify the base image using `FROM`. Then, you can set up the environment by using
`RUN` commands to run shell commands. Finally, you can finish the container by using a `CMD` command. This is an
@@ -85,12 +85,11 @@ In the same folder, we run the `build` command to build our image:

```bash
docker build --platform linux/amd64 -f Dockerfile -t docker_tutorial:latest .
```
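
For illustration only (this block is not part of the PR diff; the base image, package, and script name are placeholder
assumptions), a minimal Dockerfile of the shape described above -- base image via `FROM`, environment setup via `RUN`,
and a final `CMD` -- might look like:

```dockerfile
# Hypothetical example: any base image you need
FROM python:3.11-slim

# Set up the environment with RUN commands
RUN pip install --no-cache-dir numpy

# Copy in the (placeholder) script the container should run
COPY process.py /app/process.py

# The command the container executes when it starts
CMD ["python", "/app/process.py"]
```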

-The flag `--platform linux/amd64` is optional unless you are [running an M1 chip mac](./running_docker_with_m1.md). The
-`-f` flag indicates the name of the Dockerfile -- in this case, it is also optional, since `Dockerfile` is the default
-value. The `-t` flag is a way to track the docker images and containers on our system by adding a name and a tag.
-`latest` is the tag used to indicate the latest version of a Docker image. Additional useful flags include `--no-cache`
-for a clean rebuild, and you can find a full list of flags
-[here](https://docs.docker.com/reference/cli/docker/buildx/build/).
+The flag `--platform linux/amd64` is optional unless you are [running an M1 chip mac](running_docker_with_m1). The `-f`
+flag indicates the name of the Dockerfile -- in this case, it is also optional, since `Dockerfile` is the default value.
+The `-t` flag is a way to track the docker images and containers on our system by adding a name and a tag. `latest` is
+the tag used to indicate the latest version of a Docker image. Additional useful flags include `--no-cache` for a clean
+rebuild, and you can find a full list of flags [here](https://docs.docker.com/reference/cli/docker/buildx/build/).

Now that we have built the image, we can see all the Docker images that are built on our system by running the
`docker images` command:
@@ -146,8 +145,8 @@ intervention work. For an example of a system where that's operating, you can re
in Docker](https://confluence.lasp.colorado.edu/display/DS/Containerize+TIM+Processing+-+Base+Image).

Next steps, beyond going more in depth with the TIM dockerfiles, would be to learn about using the [LASP docker
-registry](./lasp_docker_registry.md). Other topics include [Docker compose](./docker_compose_examples.md), running
-Docker on [M1 chips](./running_docker_with_m1.md), and other pages under the [Docker Guidelines](./index.rst).
+registry](lasp_docker_registry). Other topics include [Docker compose](docker_compose_examples), running Docker on
+[M1 chips](running_docker_with_m1), and other pages under the [Docker Guidelines](index).

## Docker Cheat Sheet

422 changes: 421 additions & 1 deletion docs/source/workflows/docker/containerizing_idl_with_docker.md

Large diffs are not rendered by default.

173 changes: 172 additions & 1 deletion docs/source/workflows/docker/lasp_docker_registry.md
@@ -1 +1,172 @@
# LASP Docker Registry

## Purpose for this guideline

This document provides guidelines on how to use the LASP Docker registry for publishing and accessing Docker images.

## Overview

The Web Team manages an on-premises Docker registry exclusively used by LASP. The purpose of this registry is to enable
teams within LASP to publish and access Docker images. These Docker images can be created ad-hoc or in an automated
fashion using a Dockerfile located in a corresponding Bitbucket repository. Additionally, the registry can be made
available from the internet, behind WebIAM authentication, so that it can be used by cloud resources such as AWS.

The LASP Docker Registry is an instance of Sonatype Nexus Repository Pro. It runs in the DMZ and is behind WebIAM user
authentication.

## Accessing the Registry

The Web UI for Nexus is located at [https://artifacts.pdmz.lasp.colorado.edu](https://artifacts.pdmz.lasp.colorado.edu).
It is not necessary to log into the server to search and browse public repositories using the left-hand navigation menu.

> **Warning**: The UI is only accessible from inside the LASP Network.

The internal URL for the Docker repository when using Docker `push`/`pull` commands is
`docker-registry.pdmz.lasp.colorado.edu`. The same repository can also be accessed externally at
`lasp-registry.colorado.edu`.

The two URLs differ because the Nexus server is intended to serve and manage several types of artifacts via HTTPS,
while the Docker registry is a special repository that runs on a different port and protocol and cannot be accessed via
HTTPS.

The LASP Docker registry can be accessed from outside the LASP Network using Docker CLI commands (i.e., `docker push` or
`docker pull`) via the URL `lasp-registry.colorado.edu`. This allows users to access Docker images from AWS, for example.
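
As an illustrative sketch (the namespace, image, and tag are placeholders, and the `docker login` step is presumably
only needed for images that are not in a public namespace), pulling through the external hostname looks like:

```bash
# Hypothetical image path; substitute your own namespace, image, and tag
docker login lasp-registry.colorado.edu
docker pull lasp-registry.colorado.edu/web/webtcad-landing:1.0.2
```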

## Namespaces

The LASP Docker registry is organized by Namespaces. This is just a sub-folder or path that is used to group all related
images together. Namespaces can be organized by teams or missions. Once a Namespace has been identified, ACLs will be
created in Nexus that allow only specific WebIAM groups to create or push images to the Registry as well as delete
images. Images will be referred to in Docker as `<namespace>/<image>:<tag>` or more precisely
`<registry>/<namespace>/<image>:<tag>`. See [Creating an Image](#creating-an-image) below for more information.
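
For example, reusing the hypothetical image from the [Creating an Image](#creating-an-image) section below:

```bash
# <registry>/<namespace>/<image>:<tag>
docker pull docker-registry.pdmz.lasp.colorado.edu/web/webtcad-landing:1.0.2
```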

## Browsing Images

Access the Web UI via the URL above. Click on **Browse**:

![Browse for images](../../_static/lasp_docker_registry_browse1.png)

Click **"lasp-registry"**:

![Browse for images](../../_static/lasp_docker_registry_browse2.png)

Pick a team or project and expand it. Here you can see the available images under the "web" Namespace:

![Browse for images](../../_static/lasp_docker_registry_browse3.png)

> **Info**: Each Layer of a Docker image is composed of "Blobs". These are kept outside of the Namespace, but are
> referenced and used by the manifests.

You can find each available tag and its relevant metadata here.

## Creating an Image

### Manually

1. From the root directory where your Dockerfile lives, build a local image specifying an image name and tag (i.e.
`image_name:tag_version`):

```bash
$ docker build --force-rm -t webtcad-landing:1.0.2 .
Sending build context to Docker daemon 22.78 MB
Step 1 : FROM nginx:1.12.2
---> dfe062ee1dc8
...
...
...
Step 8 : RUN chown -R nginx:nginx /usr/share/nginx/html && chown root:root /etc/nginx/nginx.conf
---> Running in 8497cc7f30ed
---> 28cb8c0df12b
Removing intermediate container 8497cc7f30ed
Successfully built 28cb8c0df12b
```

2. Tag your new image with the format `<registry_URL>/<namespace>/image_name:tag`:

```bash
$ docker tag 28cb8c0df12b docker-registry.pdmz.lasp.colorado.edu/web/webtcad-landing:1.0.2
```

> **Info**: Note the "web" namespace in the URL above. This will change depending on your particular Namespace.

3. Log in to the remote registry using your username/password:

```bash
$ docker login docker-registry.pdmz.lasp.colorado.edu
Username:
```

4. Push the image into the repository/registry:

```bash
$ docker push docker-registry.pdmz.lasp.colorado.edu/web/webtcad-landing:1.0.2
```

5. Log out of the registry when complete. This removes the credentials stored in a local file.

```bash
$ docker logout
```

> **Info**: Don't forget to delete your local images if you no longer need them.
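
A sketch of that cleanup, reusing the example tags from the steps above:

```bash
# Remove the local copies once the image is safely in the registry
docker rmi docker-registry.pdmz.lasp.colorado.edu/web/webtcad-landing:1.0.2
docker rmi webtcad-landing:1.0.2
```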

### Automated

To script the process of creating an image, you can use something like Ansible with its
["docker_image"](https://docs.ansible.com/ansible/latest/collections/community/docker/docker_image_module.html) module
or something as simple as a build script in your Bitbucket repo or Jenkins Job with the above commands invoked in a Shell
Builder. The Web Team utilizes all three methods when creating images.
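
As a rough sketch of the shell-builder approach (the registry URL and namespace come from the manual steps above; the
image name, tag, and the `REGISTRY_USER`/`REGISTRY_PASSWORD` variables are placeholders you would wire up in your
Bitbucket or Jenkins configuration):

```bash
#!/usr/bin/env bash
# Hypothetical CI build script wrapping the manual steps above
set -euo pipefail

REGISTRY="docker-registry.pdmz.lasp.colorado.edu"
NAMESPACE="web"            # replace with your team's namespace
IMAGE="webtcad-landing"    # replace with your image name
TAG="${1:-latest}"         # version passed as the first argument

# Build and tag the image with its fully qualified name
docker build --force-rm -t "${REGISTRY}/${NAMESPACE}/${IMAGE}:${TAG}" .

# Log in with credentials injected by the CI job, push, then log out
echo "${REGISTRY_PASSWORD}" | docker login "${REGISTRY}" -u "${REGISTRY_USER}" --password-stdin
docker push "${REGISTRY}/${NAMESPACE}/${IMAGE}:${TAG}"
docker logout "${REGISTRY}"
```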

## Deleting

Although deleting a Docker image can be done via API commands against the registry, it is best done via the Web UI. To
do so, log in to the Registry and then browse to the particular image:tag you wish to delete:

![Delete an image](../../_static/lasp_docker_registry_delete_image1.png)

Click the "Delete Asset" button.

If you no longer need that particular image at all, you can delete the folder associated with it by selecting the folder
and clicking "Delete Folder":

![Delete an image](../../_static/lasp_docker_registry_delete_image2.png)


## Pulling an Image

1. Log in to the remote registry using your username/password. Note that this is only necessary if you are accessing a
docker image that is NOT in a public namespace:

```bash
$ docker login docker-registry.pdmz.lasp.colorado.edu
Username:
```

2. Pull the image from the repository/registry:

```bash
$ docker image pull docker-registry.pdmz.lasp.colorado.edu/web/webtcad-landing:1.0.2
```

3. Log out of the registry when complete. This removes credentials stored in a local file:

```bash
$ docker logout
```

## Requesting Access

Write access and new Namespaces require the Web Team to create WebIAM groups and Nexus ACLs. Please submit a Jira
WEBSUPPORT ticket or send an email to `[email protected]`.

## Acronyms

* **AWS** = Amazon Web Services
* **CLI** = Command-Line Interface
* **DMZ** = DeMilitarized Zone
* **HTTPS** = HyperText Transfer Protocol Secure
* **UI** = User Interface
* **URL** = Uniform Resource Locator

*Credit: Content taken from a Confluence guide written by Maxine Hartnett*
18 changes: 11 additions & 7 deletions docs/source/workflows/docker/multi_stage_builds.md
@@ -9,14 +9,14 @@ This documentation was inspired by the processes of containerizing
the [Jenkins TIM test suite](https://confluence.lasp.colorado.edu/display/DS/TIM+Containers+Archive), and providing
multiple ways of running the tests -- both through IntelliJ and locally, plus the ability to add additional ways of
testing if they came up. Additionally, there are a few complications with the ways that [Red Hat VMs run on the Mac M1
-chips](./running_docker_with_m1.md) that were introduced in 2020. All of these requirements led the TIM developers to use
+chips](running_docker_with_m1) that were introduced in 2020. All of these requirements led the TIM developers to use
something that allows for a lot of flexibility and simplification in one Dockerfile: multi-stage builds and a docker
compose file. This guideline will go over the general use case of both technologies and the specific use case of using
them for local TIM tests.

## Multi-stage Build

-### Multi-stage build overview
+### Multi-stage Build Overview

A multi-stage build is a type of Dockerfile which allows you to split up steps into different stages. This is useful if
you want to have most of the Dockerfile be the same, but have specific ending steps be different for different
@@ -28,7 +28,7 @@ on Mac vs Linux, etc. You can also use multi-stage builds to start from differen
For additional info on multi-stage builds, you can check out the [Docker
documentation](https://docs.docker.com/build/building/multi-stage/).

-### Creating and using a multi-stage build
+### Creating and Using a Multi-stage Build

Creating a multi-stage build is simple. You can name the first stage, where you build off a base image, using `FROM
<image name> AS <stage name>`. After that, you can use it to build off of in later steps -- in this case, you can see
@@ -54,7 +54,7 @@ RUN g++ -o /binary source.cpp
To specify the stage, you can use the `--target <target name>` flag when building from the command line, or `target:
<target name>` when building from a docker compose file.
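
Since this diff does not render the guide's full example, here is a generic sketch (the base image, stage names, and
commands are placeholder assumptions, not the TIM Dockerfile) of naming stages and then selecting one:

```dockerfile
# Shared base stage with everything both variants need
FROM almalinux:9 AS base
RUN dnf install -y gcc-c++ && dnf clean all
COPY source.cpp /src/source.cpp

# Stage used when running the tests locally in a terminal
FROM base AS local_test
RUN g++ -o /binary /src/source.cpp
CMD ["/binary"]

# Stage used when an IDE drives the tests; it simply stays alive so the IDE can attach
FROM base AS ide_test
CMD ["sleep", "infinity"]
```

A specific stage would then be built with something like `docker build --target local_test -t tim_tests:local .`, or
with `target: local_test` under the service's `build:` section in a compose file.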

-### TIM containerization multi-stage builds
+### TIM Containerization Multi-stage Builds

In the TIM test container, multi-stage builds are used to differentiate between running the tests locally in a terminal
and running the commands through IntelliJ. In the local terminal, the code is copied in through an external shell
@@ -75,15 +75,15 @@ Jenkins will most likely use the same base target as local testing -- but for pr
production code into the container, this can be added as a separate target. Multi-stage builds allow us to put all
these use cases into one Dockerfile, while the docker compose file allows us to simplify using that end file.

-## Docker compose
+## Docker Compose

A [docker compose file](https://docs.docker.com/reference/compose-file/) is often used to create a network of different
docker containers representing different services. However, it is also a handy way of automatically creating volumes,
secrets, environment variables, and various other aspects of a Docker container. Basically, if you find yourself using
the same arguments in your `docker build` or `docker run` commands, that can often be automated into a docker compose
file.

-### Docker compose overview
+### Docker Compose Overview

Using a docker compose file allows you to simplify the commands needed to start and run docker containers. For example,
let's say you need to run a Docker container using a Dockerfile in a different directory with an attached volume and an
@@ -137,7 +137,7 @@ the docker compose file built.
Docker compose is a very powerful tool, and everything you can do when building on the command line can also be done in
docker compose. For more info, you can check out the [docker compose documentation](https://docs.docker.com/compose/).
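
The worked example itself is not rendered in this diff, so here is a generic sketch of the idea (the service name,
paths, and variable are placeholders, not the TIM setup): arguments that would otherwise be repeated on every
`docker build`/`docker run` invocation move into the compose file.

```yaml
# docker-compose.yml -- hypothetical sketch
services:
  app:
    build:
      context: ./some_other_directory   # Dockerfile that lives in another directory
      dockerfile: Dockerfile
    volumes:
      - ./data:/data                    # attached volume
    environment:
      MY_SETTING: example               # environment variable passed into the container
```

With this in place, the long command line collapses to `docker compose up --build`.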

-### TIM `docker-compose.yml` file explained
+### TIM `docker-compose.yml` File Explained

All the Docker functionality can easily be accessed using the docker compose file. Currently, this is set up for only a
few different use cases, but it can be updated if needed.
@@ -225,6 +225,10 @@ Here is what each piece of the service setting means:
* `environment`: This can be used to pass environment variables into the docker container. Currently, this is only used
for the `single_test` service, to set the test that you want to run.

## Useful Links

* [Official Docker documentation](https://docs.docker.com/)
* [Multi-stage Builds Documentation](https://docs.docker.com/build/building/multi-stage/)

## Acronyms

* **apk** = Alpine Package Keeper
4 changes: 2 additions & 2 deletions docs/source/workflows/docker/running_docker_with_m1.md
@@ -29,7 +29,7 @@ Red Hat has some [compatibility issues](https://access.redhat.com/discussions/59
method used for virtualization. This is because (according to the RHEL discussion boards) Red Hat is built for a default
page size of 64k, while the M1 CPU page size is 16k. One way to fix this is to rebuild for a different page size.

-### Short-term solution
+### Short-term Solution

A solution is to use the [`platform` option](https://devblogs.microsoft.com/premier-developer/mixing-windows-and-linux-containers-with-docker-compose/)
in [docker compose](https://docs.docker.com/compose/). It's an option that allows you to select the platform to build
@@ -62,7 +62,7 @@ The `platform` keyword is also available on `docker build` and `docker run` comm
`--platform linux/amd64` on `docker build` and `docker run` commands you can force the platform without needing to use
`docker compose`.
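
For reference, a minimal sketch of the compose form mentioned above (the service name is a placeholder):

```yaml
# Hypothetical docker-compose.yml fragment
services:
  rhel_service:
    build: .
    platform: linux/amd64   # force an amd64 image on Apple silicon
```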

-## Docker container hanging on M1
+## Docker Container Hanging on M1

[This is a known issue with Docker](https://github.com/docker/for-mac/issues/5590). Docker has already released some
patches to try and fix it, but it could still be encountered. Basically, the Docker container will hang permanently