diff --git a/docs/source/index.rst b/docs/source/index.rst index 6593bea..b1cb31f 100644 --- a/docs/source/index.rst +++ b/docs/source/index.rst @@ -8,3 +8,4 @@ Welcome to the LASP Developer's Guide! licensing data_management/index + workflows/index diff --git a/docs/source/workflows/docker/beginner_guide_to_docker.md b/docs/source/workflows/docker/beginner_guide_to_docker.md new file mode 100644 index 0000000..f102df7 --- /dev/null +++ b/docs/source/workflows/docker/beginner_guide_to_docker.md @@ -0,0 +1,194 @@ +# A Beginner's Guide to Docker + +## Purpose for this guideline + +This guide is intended to provide an overview of what Docker is, how it's used, and the basics of running Docker +containers. It will not go in depth on creating a Docker image, or on the more nuanced aspects of using Docker. For a +more in-depth introduction, you can read through the official Docker docs. + +## What is Docker? + +Docker is a tool for containerizing code. You can basically think of it as a lightweight virtual machine. Docker works +by defining an image which includes whatever you need to run your code. You start with a base image, which is a pre-made +Docker image, then install your dependencies on top. Python? Java? Fortran libraries? Almost anything you can install +on a normal computer, you can install into Docker. There are plenty of base images available. You can start with +something as basic as [Arch linux](https://hub.docker.com/_/archlinux), or as complicated as a +[Windows base image with Python already installed](https://hub.docker.com/r/microsoft/windows-cssc-python). + +Once you have created your Docker image, it can be uploaded to LASP's internal registry for other people or machines to +use. Every machine runs the Docker image in the same way. The same image can be used for local development, for running +tests in Jenkins or GitHub Actions, or for running production code in AWS Lambdas.
It creates a standard environment, so +new developers can get started quickly, and so everyone can keep their local environments clean. Docker also makes it +possible to archive the entire environment, not just the code. Code is only useful as long as people can run it. +Finally, unlike many virtual machines, Docker is lightweight enough to be run only when needed, and updated frequently. + +## Basics of Docker + +If you've used Virtual Machines in the past, the basic uses of Docker will be familiar to you. A few terms are defined +below. For a more in-depth explanation, see the [official Docker overview](https://docs.docker.com/get-started/). + +**Docker Image:** The Docker image contains all the information needed to run the Docker container. This includes the +entire operating system, file system, and dependencies. + +**Docker Container:** A Docker container is a specific instance of a Docker image. A Docker container is used to run +commands within the environment defined by the Docker image. + +**Dockerfile:** The dockerfile is what defines a Docker image. It contains the commands for building a Docker image, +including things like the base image to use, the installation steps to run, creating needed directories, etc. + +**Docker Compose:** A Docker compose file is an optional file which defines how to run the Docker images. This can be +useful if you will be running multiple images in tandem, attaching volumes or networks to the containers, or if you just +generally find yourself running the same commands for creating containers and want to streamline that. + +**Docker Registry:** A registry or archive store is a place to store and retrieve docker images. This is one way to +share already-built docker images. LASP has a private registry, in the form of the +[LASP docker registry](./lasp_docker_registry.md). + +So, you define a Docker *image* using a *Dockerfile* and/or a *Docker Compose* file.
Running this image produces a +Docker *container*, which runs your code and environment. An image can be pushed up to a *registry*, where anyone with +access can pull the image and run the container themselves without needing access to the Dockerfile. + + +## Getting Started + +This section will outline some basic commands and use cases for Docker. First, you need to +[install Docker](https://docs.docker.com/get-started/get-docker/) on your computer. Next, start by creating a +dockerfile. This example dockerfile will run an `alpine` image and install Python. Traditionally, dockerfiles are named +`Dockerfile`, although you can add to that name if needed (e.g., `dev.Dockerfile`). The `docker build` command will look in +the current directory for a file named `Dockerfile` by default, but you can specify a different file through command-line +arguments or through your docker compose file. + +Generally, each Docker image should be as small as possible. Each Dockerfile should only do one thing at a time. If you +have a need for two extremely similar docker containers, you can also use [Multi-stage builds](./multi_stage_builds.md). +You can orchestrate multiple docker containers that depend on each other using +[Docker compose](./docker_compose_examples.md). + +To start, your Dockerfile should specify the base image using `FROM <base image>`. Then, you can set up the environment by using +`RUN` commands to run shell commands. Finally, you can finish the Dockerfile with a `CMD` command. This is an +optional command that runs when the container starts, once the entire container is set up.
+ +Here is our example Dockerfile: + +```dockerfile +# Starting with alpine as our base image +FROM alpine + +# Install python +RUN apk add --update --no-cache python3 && ln -sf python3 /usr/bin/python +RUN python3 -m ensurepip +RUN pip3 install --no-cache-dir --upgrade pip setuptools +``` + +In the same folder, we run the `build` command to build our image: + +```bash +docker build --platform linux/amd64 -f Dockerfile -t docker_tutorial:latest . +``` + +The flag `--platform linux/amd64` is optional unless you are [running an M1 chip mac](./running_docker_with_m1.md). The +`-f` flag indicates the name of the Dockerfile -- in this case, it is also optional, since `Dockerfile` is the default +value. The `-t` flag is a way to track the docker images and containers on our system by adding a name and a tag. +`latest` is the tag used to indicate the latest version of a Docker image. Additional useful flags include `--no-cache` +for a clean rebuild, and you can find a full list of flags +[here](https://docs.docker.com/reference/cli/docker/buildx/build/). + +Now that we have built the image, we can see all the Docker images that are built on our system by running the +`docker images` command: + +``` +$ docker images +REPOSITORY TAG IMAGE ID CREATED SIZE +docker_tutorial latest 71736be7c555 5 minutes ago 91.9MB +``` + +> **Info**: If you prefer to use a GUI, the Docker Desktop application can also be used to view, run, and delete docker +> images. + +If we wanted, we could now push that image up to a registry by using the `docker push` +[command](https://docs.docker.com/reference/cli/docker/image/push/). Alternatively, instead of building the image, you +could pull an existing image using the `docker pull` [command](https://docs.docker.com/reference/cli/docker/image/pull/).
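To make the push step concrete, here is a sketch of what pushing our tutorial image could look like. The registry hostname and project path below are purely illustrative placeholders, not a real LASP registry location, and the commands require a running Docker daemon and registry access:

```bash
# Tag the local image with the (hypothetical) registry hostname and project path
docker tag docker_tutorial:latest registry.example.com/tutorials/docker_tutorial:latest

# Authenticate against the registry, then push the tagged image
docker login registry.example.com
docker push registry.example.com/tutorials/docker_tutorial:latest
```

Anyone with access to that registry could then run `docker pull registry.example.com/tutorials/docker_tutorial:latest` and get exactly the image built here.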
+ +Now that we have an image locally, we can run a container from that image using the `docker run` command: + +```bash +docker run --platform linux/amd64 -it --name tutorial docker_tutorial:latest +``` + +Once again, the platform is optional, unless you are on an M1 mac. The `-it` flag opens an interactive `tty` session -- +basically so you can interact with the container via the command line. The ``--name`` flag gives the container a name. +Another key flag to know is `-d`, which runs the container in detached mode. This will let the container run in the +background without attaching to your terminal. You can see all currently running Docker containers with `docker ps`, and +all currently existing Docker containers with `docker ps -a`. + +Running the `docker run` command will start your container and connect to it, so you can interactively run commands. If +you run `which python` in this container, you should see that Python is successfully installed. You can use `^D` to +exit the container's shell, which also stops the container. + +With that, you have successfully run the Docker container! This is a good way to debug and run code inside a container +for development purposes. If you want to have the Docker image automatically execute code when you run it, you can use +the `CMD` command. For example, this can be used to run tests or the main application for a lambda container. + +To do this, add a line with a `CMD` at the bottom of your `Dockerfile`: + +```dockerfile +CMD echo "Hello world" +``` + +Once you rebuild the image, you can run it without the interactive session: + +```bash +docker run --platform linux/amd64 docker_tutorial:latest +``` + +This will start the container, execute the command in `CMD`, and then exit. You can see that the container +has successfully exited with `docker ps -a`. The `CMD` is how most Docker containers that run code without human +intervention work.
For an example of this in operation, you can read the documentation on the [TIM tests +in Docker](https://confluence.lasp.colorado.edu/display/DS/Containerize+TIM+Processing+-+Base+Image). + +Next steps, beyond going more in depth with the TIM dockerfiles, would be to learn about using the [LASP docker +registry](./lasp_docker_registry.md). Other topics include [Docker compose](./docker_compose_examples.md), running +Docker on [M1 chips](./running_docker_with_m1.md), and other pages under the [Docker Guidelines](./index.rst). + +## Docker Cheat Sheet + +Here is a list of Docker commands that might be useful to have as a shorthand: + +```bash +# build locally +docker build --platform linux/amd64 -f <Dockerfile> -t <image_name>:latest . + +# Run in interactive mode +docker run --platform linux/amd64 -it --name <container_name> <image_name>:latest + +# Login to docker registry +docker login docker-registry.pdmz.lasp.colorado.edu + +# View docker images +docker images + +# View docker containers +docker ps -a + +# Remove stopped containers +docker container prune + +# Remove dangling images (run after container prune) +docker image prune +``` + +## Useful Links +* [Official Docker documentation](https://docs.docker.com/) +* [Installing Docker engine](https://docs.docker.com/engine/install/) +* [Installing Docker Desktop for Mac](https://docs.docker.com/desktop/install/mac-install/) +* [Docker CLI cheatsheet](https://docs.docker.com/get-started/docker_cheatsheet.pdf) + +## Acronyms + +* **apk** = Alpine Package Keeper +* **amd64** = 64-bit Advanced Micro Devices +* **AWS** = Amazon Web Services +* **pip** = Pip Installs Packages +* **ps** = Process Status +* **tty** = TeleTYpe (terminal) + +*Credit: Content taken from a Confluence guide written by Maxine Hartnett* \ No newline at end of file diff --git a/docs/source/workflows/docker/containerizing_idl_with_docker.md b/docs/source/workflows/docker/containerizing_idl_with_docker.md new file mode 100644 index 0000000..ebdead3 --- /dev/null +++ 
b/docs/source/workflows/docker/containerizing_idl_with_docker.md @@ -0,0 +1 @@ +# Containerizing IDL with Docker \ No newline at end of file diff --git a/docs/source/workflows/docker/docker_compose_examples.md b/docs/source/workflows/docker/docker_compose_examples.md new file mode 100644 index 0000000..4cfd1d9 --- /dev/null +++ b/docs/source/workflows/docker/docker_compose_examples.md @@ -0,0 +1 @@ +# Docker Compose Examples \ No newline at end of file diff --git a/docs/source/workflows/docker/export_display_with_docker.md b/docs/source/workflows/docker/export_display_with_docker.md new file mode 100644 index 0000000..71a1fbe --- /dev/null +++ b/docs/source/workflows/docker/export_display_with_docker.md @@ -0,0 +1 @@ +# Export Display with Docker \ No newline at end of file diff --git a/docs/source/workflows/docker/index.rst b/docs/source/workflows/docker/index.rst new file mode 100644 index 0000000..55ce061 --- /dev/null +++ b/docs/source/workflows/docker/index.rst @@ -0,0 +1,15 @@ +Docker +====== + +.. 
toctree:: + :maxdepth: 1 + + beginner_guide_to_docker + containerizing_idl_with_docker + docker_compose_examples + export_display_with_docker + jenkins_job_builder + lasp_docker_registry + lasp_image_registry + multi_stage_builds + running_docker_with_m1 \ No newline at end of file diff --git a/docs/source/workflows/docker/jenkins_job_builder.md b/docs/source/workflows/docker/jenkins_job_builder.md new file mode 100644 index 0000000..df11739 --- /dev/null +++ b/docs/source/workflows/docker/jenkins_job_builder.md @@ -0,0 +1 @@ +# Jenkins Job Builder \ No newline at end of file diff --git a/docs/source/workflows/docker/lasp_docker_registry.md b/docs/source/workflows/docker/lasp_docker_registry.md new file mode 100644 index 0000000..ab6cbf8 --- /dev/null +++ b/docs/source/workflows/docker/lasp_docker_registry.md @@ -0,0 +1 @@ +# LASP Docker Registry \ No newline at end of file diff --git a/docs/source/workflows/docker/lasp_image_registry.md b/docs/source/workflows/docker/lasp_image_registry.md new file mode 100644 index 0000000..964302e --- /dev/null +++ b/docs/source/workflows/docker/lasp_image_registry.md @@ -0,0 +1 @@ +# LASP Image Registry \ No newline at end of file diff --git a/docs/source/workflows/docker/multi_stage_builds.md b/docs/source/workflows/docker/multi_stage_builds.md new file mode 100644 index 0000000..71e2615 --- /dev/null +++ b/docs/source/workflows/docker/multi_stage_builds.md @@ -0,0 +1,235 @@ +# Docker Compose and Multi-stage Builds + +## Purpose for this guideline + +This guide provides an overview and some examples of how to create a multi-stage build in a Dockerfile, which allows a +user to split up build steps into different stages. 
+ +This documentation was inspired by the processes of containerizing +the [Jenkins TIM test suite](https://confluence.lasp.colorado.edu/display/DS/TIM+Containers+Archive), and providing +multiple ways of running the tests -- both through Intellij and locally, plus the ability to add additional ways of +testing if they came up. Additionally, there are a few complications with the ways that [Red Hat VMs run on the Mac M1 +chips](./running_docker_with_m1.md) that were introduced in 2020. All of these requirements led the TIM developers to use +something that allows for a lot of flexibility and simplification in one Dockerfile: multi-stage builds and a docker +compose file. This guideline will go over the general use case of both technologies and the specific use case of using +them for local TIM tests. + +## Multi-stage Build + +### Multi-stage build overview + +A multi-stage build is a type of Dockerfile which allows you to split up steps into different stages. This is useful if +you want to have most of the Dockerfile be the same, but have specific ending steps be different for different +environments or use cases. Rather than copying the commands into different Dockerfiles, you can have multiple stages. +In this case, we want the Dockerfile to do different steps if it's running locally in the terminal, if it's running in +Jenkins, or if it's running through Intellij. This can also be used for development vs production environments, running +on Mac vs Linux, etc. You can also use multi-stage builds to start from different base images. + +For additional info on multi-stage builds, you can check out the [Docker +documentation](https://docs.docker.com/build/building/multi-stage/). + +### Creating and using a multi-stage build + +Creating a multi-stage build is simple. You can name the first stage, where you build off a base image, using +`FROM <base image> AS <stage name>`.
After that, you can use it to build off of in later steps -- in this case, you can see +that the first stage is named `builder` and the second stage builds off of `builder` in a new stage named `build1`. This +means if you run the stage `build1`, the Docker build will first do all the steps in the `builder` stage, and then +execute all the steps in `build1`: + +```dockerfile +# syntax=docker/dockerfile:1 +# FROM https://docs.docker.com/develop/develop-images/multistage-build/#use-an-external-image-as-a-stage +FROM alpine:latest AS builder +RUN apk --no-cache add build-base + +FROM builder AS build1 +COPY source1.cpp source.cpp +RUN g++ -o /binary source.cpp + +FROM builder AS build2 +COPY source2.cpp source.cpp +RUN g++ -o /binary source.cpp +``` + +To specify the stage, you can use the `--target <stage name>` flag when building from the command line, or +`target: <stage name>` when building from a docker compose file. + +### TIM containerization multi-stage builds + +In the TIM test container, multi-stage builds are used to differentiate between running the tests locally in a terminal +and running the commands through Intellij. In the local terminal, the code is copied in through an external shell +script, and then the `ant` build steps are run through `docker exec` commands. This is more complicated to learn, but +ultimately allows for more flexibility and the ability to re-run the tests without needing to rebuild the +container. This also lets the end user run the tests multiple times without stopping and starting the container. +However, this means the container isn't automatically turned off. This use case is more for people who have some +experience using Docker and want more flexibility. Therefore, it uses the most basic "base" target, which doesn't build +any TIM processing code into the image and provides only the basic setup for using the container. + +The case for running in Intellij is slightly different.
This was intended to work similarly to the way JUnit tests run +in Intellij -- so the user hits a button, the tests run, and then everything is cleaned up afterwards. This resulted in +a second stage which would run the `ant` tests and generate a test report, before removing the container. This stage can +also be run from the command line, but since the test results need to be copied out, the container still has to be +manually stopped. + +Jenkins will most likely use the same base target as local testing -- but for production, if we decide to embed the +production code into the container, this can be added as a separate target. Multi-stage builds allow us to put all +these use cases into one Dockerfile, while the docker compose file makes that one Dockerfile simpler to use. + +## Docker compose + +A [docker compose file](https://docs.docker.com/reference/compose-file/) is often used to create a network of different +docker containers representing different services. However, it is also a handy way of automatically creating volumes, +secrets, environment variables, and various other aspects of a Docker container. Basically, if you find yourself using +the same arguments in your `docker build` or `docker run` commands, that can often be automated into a docker compose +file. + +### Docker compose overview + +Using a docker compose file allows you to simplify the commands needed to start and run docker containers. For example, +let's say you need to run a Docker container using a Dockerfile in a different directory with an attached volume and an +environment variable. Maybe you also want to name the container `example`. The command to build and start that docker +container would normally be: + +```bash +$ docker build -f dockerfiles/Dockerfile -t example:latest . +$ docker run -d \ + --name example \ + -v myvol2:/app \ + -e ENV_VAR="test var" \ + example:latest +``` + +This can easily grow in complexity as more volumes are needed, secrets passed in, etc.
However, if most of these +settings don't change between runs, then we can instead move them into a docker compose file. This file is named +`docker-compose.yml` and for this example would look something like this: + +```yaml +version: "3.9" +services: +  base: +    build: +      dockerfile: dockerfiles/Dockerfile +    tty: true +    container_name: example +    volumes: +      - myvol2:/app +    environment: +      ENV_VAR: "test var" +``` + +If you wanted to tag this image, you can find more info on doing that +[here](https://docs.docker.com/reference/compose-file/build/#consistency-with-image). + +With this docker compose file, you can build and run the Dockerfile with the following command: + +```bash +$ docker compose up -d +``` + +You can run multiple Dockerfiles out of one docker compose file by adding additional services. You can run a specific +service by adding its name to the command, for example `docker compose up base`. + +Docker compose will use existing containers or images if they exist, and cache any info that's downloaded from the +internet when rebuilding the images. If you want to stop the docker container without removing the image, you can use +`docker compose down`. If you do want to remove the images, `docker compose down --rmi all` will remove all images that +the docker compose file built. + +Docker compose is a very powerful tool, and everything you can do when building on the command line can also be done in +docker compose. For more info, you can check out the [docker compose documentation](https://docs.docker.com/compose/). + +### TIM `docker-compose.yml` file explained + +All the Docker functionality can easily be accessed using the docker compose file. Currently, this is set up for only a +few different use cases, but it can be updated if needed. + +```yaml +# As of 6/6/22 +version: "3.8" +services: + base: + # Create a container without copying in tim_processing or running any commands + # The platform setting is needed to run on Mac M1 chip, comment out if you're on a different type of machine.
+ platform: linux/x86_64 + build: + context: ../../../ + dockerfile: ./tim_processing/docker/Dockerfiles/Dockerfile + target: base + container_name: tim_processing_base + tty: true + + intellij: + # Create a container with the local tim_processing copied in and run docker/scripts/antbuild.sh + platform: linux/x86_64 + build: + context: ../../../ + dockerfile: ./tim_processing/docker/Dockerfiles/Dockerfile + target: intellij + container_name: tim_processing_intellij + tty: true + + test_tsis: + # Create the same image as intellij and run ant tim_processing_tsis_tests + platform: linux/x86_64 + build: + context: ../../../ + dockerfile: ./tim_processing/docker/Dockerfiles/Dockerfile + target: test_tsis + container_name: tim_processing_test_tsis + tty: true + + report_tsis: + # Create the same image as intellij, run tim_processing_tsis_tests, and generate a test report. + platform: linux/x86_64 + build: + context: ../../../ + dockerfile: ./tim_processing/docker/Dockerfiles/Dockerfile + target: test_report_tsis + container_name: tim_processing_report_tsis + tty: true + + single_test: + # Create the same image as intellij, run a single test as determined by the SINGLE_TEST_TIM environment variable, + # and generate a test report. + platform: linux/x86_64 + build: + context: ../../../ + dockerfile: ./tim_processing/docker/Dockerfiles/Dockerfile + target: single_test_report + environment: + SINGLE_TEST_TIM: ${SINGLE_TEST_TIM} + container_name: tim_processing_single_test + tty: true +``` + +Each service is designed for a slightly different use case. The most basic one, `base`, just creates and runs the +Docker container with the necessary data files and tools to run the `tim_processing` tests. This is generally the one +used for working in the terminal. The others all run different stages of the build, and are used in Intellij to create +simple configurations that can accomplish different goals. 
+ +Here is what each piece of the service setting means: + +* `base` | `test_tsis` | `intellij`: this is the name of the service. You can designate what service to use by passing + it to the docker compose command: `docker compose up base`. +* `platform`: This is required for Mac M1 chips, but not for other machines. This is due to issues running Red Hat on M1 + chips. +* `build`: This block contains the build info. It is somewhat optional: it can be replaced with an image block if you + decide to move to images that are mostly stored in the registry rather than built locally. +* `context`: This sets the build context, i.e., the directory that build steps such as `COPY` resolve paths against. In + this case, it's set to above the `tim_processing` parent directory so the entire codebase can be copied in. +* `dockerfile`: Since the context is set to one above `tim_processing`, we need to specify where the dockerfile is, even + though the Dockerfile and the `docker-compose.yml` file are in the same directory in the repo. +* `target`: This designates the target for the multi-stage build. This is the main difference between the different + services. +* `container_name`: The name of the container. +* `tty: true`: This line keeps the created docker container alive after it starts, so it can run in detached mode + without exiting immediately. +* `environment`: This can be used to pass environment variables into the docker container. Currently, this is only used + for the `single_test` service, to set the test that you want to run.
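To illustrate the pattern, here is what a hypothetical additional service could look like. The service name and `target` below are made up for illustration and are not stages that exist in the TIM Dockerfile:

```yaml
  # Hypothetical service: run the full test suite and generate a report
  report_all:
    platform: linux/x86_64        # needed on M1 Macs, as with the other services
    build:
      context: ../../../
      dockerfile: ./tim_processing/docker/Dockerfiles/Dockerfile
      target: test_report_all     # made-up stage name for this example
    container_name: tim_processing_report_all
    tty: true
```

You would then run it like any other service: `docker compose up report_all`.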
+ +## Acronyms + +* **apk** = Alpine Package Keeper +* **TIM** = Total Irradiance Monitor +* **tty** = TeleTYpe (terminal) +* **VM** = Virtual Machine + +*Credit: Content taken from a Confluence guide written by Maxine Hartnett* diff --git a/docs/source/workflows/docker/running_docker_with_m1.md b/docs/source/workflows/docker/running_docker_with_m1.md new file mode 100644 index 0000000..5dfdd11 --- /dev/null +++ b/docs/source/workflows/docker/running_docker_with_m1.md @@ -0,0 +1,85 @@ +# Running Docker on Mac M1 Chips + +## Purpose for this guideline + +This document provides guidelines on how to use Docker with Apple computers that use M1 chips. + +## Background + +In 2020, Apple started using a [new design of chip](https://www.macrumors.com/guide/m1/) in their computers. +This chip, called the M1 chip, replaced the Intel x86 architecture of previous Mac computers with a new ARM-based system +designed at Apple. This allowed Apple to move away from Intel chips and start producing their own. Unfortunately, this +new style of chip introduces plenty of incompatibility problems, since it's based on an ARM architecture, whereas most +computers are still on x86. ARM is still primarily used in mobile devices. It is generally possible to run x86 programs +on ARM, but [performance takes a hit](https://www.toptal.com/apple/apple-m1-processor-compatibility-overview). + +Unfortunately, Docker is one program that depends heavily on the underlying architecture. Most software either has +already added or will soon add support for M1 chips, since Apple is such a large portion of the market, but there will +be growing pains, since this is a pretty big departure from existing chips. + +This guide will cover some of the issues that may come up when running Docker on an M1 chip computer. At the time of +writing, they all have various workarounds, but may have performance costs.
+ +Here are some of the [known issues surrounding running Docker on M1 +Macs](https://docs.docker.com/desktop/install/mac-install/#known-issues). + +## Running RHEL 7 on Docker + +Red Hat has some [compatibility issues](https://access.redhat.com/discussions/5966451) with the M1 chip no matter the +method used for virtualization. This is because (according to the RHEL discussion boards) Red Hat is built for a default +page size of 64k, while the M1 CPU page size is 16k. One way to fix this is to rebuild for a different page size. + +### Short-term solution + +A solution is to use the [`platform` option](https://devblogs.microsoft.com/premier-developer/mixing-windows-and-linux-containers-with-docker-compose/) +in [docker compose](https://docs.docker.com/compose/). It's an option that allows you to select the platform the image +is built for. If you set the platform to `linux/x86_64` then the page size issue isn't a problem and the Docker image will +build locally. It is not entirely clear what this does under the hood (most likely the x86 version of the image runs +under an emulation layer on the M1 chip), but it's a simple fix that works well. + +This does require an up-to-date version of Docker, as the `platform` keyword is only available in [more recent +versions](https://github.com/docker/compose/pull/5985). The newer versions of Docker also include some fixes for the M1 +chips in general. + +Below is an example of a `docker-compose` file. To run this, you simply run `docker compose up` and it will run the +provided Dockerfile. You can add a `-d` flag to run the container in detached mode. To shut the container down, run +`docker compose down`. It will cache a bunch of info, so if you want it to rebuild the image, you can run `docker compose +down --rmi all` to remove the images: + +```yaml +version: "3.8" +services: +  base: + # The platform setting is needed to run on Mac M1 chip, comment out if you're on a different type of machine.
+ platform: linux/x86_64 + build: + dockerfile: ./Dockerfile + container_name: container_name + tty: true +``` + +The `platform` keyword is also available on the `docker build` and `docker run` commands: passing +`--platform linux/amd64` to them forces the platform without needing to use `docker compose`. + +## Docker container hanging on M1 + +[This is a known issue with Docker](https://github.com/docker/for-mac/issues/5590). Docker has already released some +patches to try and fix it, but it could still be encountered. Basically, the Docker container will hang permanently +while running. Sometimes running `docker ps` will work, sometimes not. In order to recover from that point, restart the +Docker daemon. If you're using Docker Desktop, you can restart it that way, although it may require you to force kill +any Docker processes. + +The most reliable workaround is to go to the Docker Desktop Dashboard > Settings > Resources and change the number of +CPUs down to 1. This will obviously impact performance in other ways, but may help avoid encountering this permanent +hang. + + +## Acronyms + +* **amd64** = 64-bit Advanced Micro Devices +* **ARM** = Advanced RISC (Reduced Instruction Set Computer) Machines +* **CPU** = Central Processing Unit +* **ps** = Process Status +* **RHEL** = Red Hat Enterprise Linux + +*Credit: Content taken from a Confluence guide written by Maxine Hartnett* diff --git a/docs/source/workflows/index.rst b/docs/source/workflows/index.rst new file mode 100644 index 0000000..93c6a20 --- /dev/null +++ b/docs/source/workflows/index.rst @@ -0,0 +1,8 @@ +Workflows +========= + +.. toctree:: + :maxdepth: 1 + + docker/index +