
Containerise dependencies required to run tasks in dev workflow #53


Open · wants to merge 17 commits into main

Conversation

banjoh
Member

@banjoh banjoh commented Apr 25, 2025

  • Add tasks to manage container dev environment
  • Update documentation to reflect new list of requirements

Member

@chris-sanders chris-sanders left a comment


I started calling out every place that says 'docker', then realized you've labeled that as 'to follow', so I stopped since it's still WIP.

```yaml
# GCP default configuration
GCP_PROJECT: '{{.GCP_PROJECT | default "replicated-qa"}}'
GCP_ZONE: '{{.GCP_ZONE | default "us-central1-a"}}'
VM_NAME: '{{.VM_NAME | default (printf "%s-dev" (or (env "GUSER") "user"))}}'

# Docker workflow configuration
IMAGE_NAME: ttl.sh/wg-easy-dev
```
Member


We really shouldn't be using ttl.sh here; it's fine for development, but the image needs to be available somewhere else for the actual workflow. Is this currently just for you to test, with the intent to move it later?

Member Author


This is currently for testing until we decide on:

  • whether we need to host this image at all. We need to weigh hosting the image against every user building it locally, which takes about a minute.
  • which OCI registry to use: Docker Hub? Replicated? All of the above?

```yaml
# GCP default configuration
GCP_PROJECT: '{{.GCP_PROJECT | default "replicated-qa"}}'
GCP_ZONE: '{{.GCP_ZONE | default "us-central1-a"}}'
VM_NAME: '{{.VM_NAME | default (printf "%s-dev" (or (env "GUSER") "user"))}}'

# Docker workflow configuration
```
Member


Suggested change:

```diff
-# Docker workflow configuration
+# Workflow container configuration
```

Comment on lines 32 to 33
```yaml
IMAGE_NAME: ttl.sh/wg-easy-dev
CONTAINER_NAME: wg-easy-dev
```
Member


In a helm chart you would expect to see:

```yaml
repository: ttl.sh
image: wg-easy-dev
tag: latest
```

Shouldn't we use the same phrasing here? Maybe:

```
DEV_CONTAINER_REPOSITORY
DEV_CONTAINER_IMAGE
DEV_CONTAINER_TAG
```
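
One way that split could look in the Taskfile vars (the names and defaults below are hypothetical, mirroring the helm convention mentioned above):

```yaml
vars:
  DEV_CONTAINER_REPOSITORY: '{{.DEV_CONTAINER_REPOSITORY | default "ttl.sh"}}'
  DEV_CONTAINER_IMAGE: '{{.DEV_CONTAINER_IMAGE | default "wg-easy-dev"}}'
  DEV_CONTAINER_TAG: '{{.DEV_CONTAINER_TAG | default "latest"}}'
  # Composed reference that run tasks would consume
  DEV_CONTAINER_REF: '{{.DEV_CONTAINER_REPOSITORY}}/{{.DEV_CONTAINER_IMAGE}}:{{.DEV_CONTAINER_TAG}}'
```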

@@ -0,0 +1,68 @@
# Base image for all shared Dockerfiles for taskfiles
Member


Why would this live in a subfolder "container" if it's the only thing in that subfolder?
Do you expect scripts for entrypoints and such to be added here in the future too?

Member Author


I would like to add an entrypoint script for shell completions etc.
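
Such an entrypoint might look like the sketch below (hypothetical; the PR only states the intent to add one). It sources completions for tools that are present, then execs the requested command; the tool names and the `/tmp` path are assumptions for illustration:

```shell
# Hypothetical entrypoint: wire up shell completions, then hand off.
cat > /tmp/entrypoint.sh <<'EOF'
#!/usr/bin/env bash
set -e
for tool in helm kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    # both tools ship a 'completion bash' subcommand
    source <("$tool" completion bash)
  fi
done
# replace the shell with the requested command
exec "$@"
EOF
chmod +x /tmp/entrypoint.sh

/tmp/entrypoint.sh echo "entrypoint works"   # prints: entrypoint works
```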

Comment on lines 20 to 21
```dockerfile
gnupg \
sudo \
```
Member


Are we using these in the container?

Member Author


Only yq and jq. I think the rest can be removed.

```dockerfile
sudo \

# Install Helm
&& curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash \
```
Member


Why is this all one big RUN line?
Doesn't that invalidate the cache whenever any one of these tools changes, instead of rebuilding only the layer that changed? Is there a benefit to it? I only see the downside of breaking normal caching behavior.

Member Author


Since we are not pinning any tool versions, would any of these tools actually change and cause the layer to be recreated? Updating the tool versions would require `docker build --no-cache` anyway, wouldn't it?

One difference, though negligible, is overall image size: it's smaller with a single layer, 1.18 GB vs 1.23 GB.
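
For comparison, per-tool layers with pinned versions could look like the sketch below (the base image, package list, and Helm version here are assumptions, not taken from this PR):

```dockerfile
FROM debian:bookworm-slim

# apt packages in one layer; this list changes rarely
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl ca-certificates jq \
    && rm -rf /var/lib/apt/lists/*

# Pinning the version makes caching deterministic: this layer is rebuilt
# only when HELM_VERSION changes, with no need for --no-cache.
ARG HELM_VERSION=v3.14.4
RUN curl -fsSL "https://get.helm.sh/helm-${HELM_VERSION}-linux-amd64.tar.gz" \
    | tar -xz -C /usr/local/bin --strip-components=1 linux-amd64/helm
```

The trade-off is a few extra layers against the ~50 MB saved by squashing everything into one RUN.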

Comment on lines 57 to 60
```dockerfile
# Create a non-root user for better security
RUN groupadd -r devuser \
&& useradd -r -g devuser -m -s /bin/bash devuser \
&& echo "devuser ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/devuser
```
Member


How does a non-root user provide security when you give it global passwordless sudo? This is root by a different name.

If podman already maps the root UID/GID to the user running the commands is this necessary at all?

Member Author


Elevated privileges are not required by any of the tasks. I'll clean that up.
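
If rootless podman is the runtime, its user-namespace mapping can replace the sudo setup entirely; a hypothetical task command might look like:

```yaml
# --userns=keep-id maps the invoking host user onto the same UID/GID
# inside the container, so no devuser or sudoers entry is needed.
cmds:
  - podman run --rm -it --userns=keep-id -w /workspace {{.IMAGE_NAME}}:{{.IMAGE_TAG}} bash
```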

```dockerfile
&& echo "devuser ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/devuser

# Set working directory
WORKDIR /app
```
Member


Personally I'd call this "workspace" or something; "app" is kind of an overloaded term. But I can take it or leave it, I don't feel strongly about it.

- **yq:** A command-line YAML processor. ([Installation Guide](https://github.com/mikefarah/yq#install))
- **gcloud CLI:** Google Cloud command-line interface (optional, only required for GCP-specific tasks). ([Installation Guide](https://cloud.google.com/sdk/docs/install))
- **Standard Unix Utilities:** `find`, `xargs`, `grep`, `awk`, `wc`, `tr`, `cp`, `mv`, `rm`, `mkdir`, `echo`, `sleep`, `test`, `eval` (typically available by default on Linux and macOS).
- **Docker:** Container runtime for local development. ([Installation Guide](https://docs.docker.com/get-docker/))
Member


podman


Made a PR against this PR branch:

```
-e USER=devuser \
-e REPLICATED_API_TOKEN={{ .REPLICATED_API_TOKEN }} \
-w /workspace \
{{.IMAGE_NAME}}:{{.IMAGE_TAG}} bash -c 'trap "exit" TERM; while :; do sleep 0.1; done')
```
Member


Suggested change:

```diff
-{{.IMAGE_NAME}}:{{.IMAGE_TAG}} bash -c 'trap "exit" TERM; while :; do sleep 0.1; done')
+{{.IMAGE_NAME}}:{{.IMAGE_TAG}} bash -c 'trap "exit 0" TERM INT; sleep infinity & wait')
```

There's no need to wake up every 0.1 seconds; just sleep forever and exit on the termination signals.

Member Author


`& wait` was the bit I needed. I couldn't get SIGTERM to work with `sleep infinity` when running in the container.
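
The pattern can be sketched outside the container to show why the `wait` matters: bash defers trap handlers until the current foreground command finishes, but the `wait` builtin is interruptible, so backgrounding the sleep lets the TERM trap fire immediately (the `/tmp` script path below is illustrative):

```shell
# Write the keepalive loop in the suggested form
cat > /tmp/keepalive.sh <<'EOF'
trap 'exit 0' TERM INT
sleep infinity & wait
EOF

# Run it, then terminate it; the trap fires promptly despite the
# infinite sleep because bash is blocked in 'wait', not in 'sleep'.
bash /tmp/keepalive.sh &
pid=$!
sleep 0.2
kill -TERM "$pid"
wait "$pid"
echo "keepalive exited with status $?"
```

Running this prints `keepalive exited with status 0`.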

@banjoh banjoh changed the title Dockerise dependencies required to run tasks in dev workflow Containerise dependencies required to run tasks in dev workflow Apr 28, 2025