Add a command structure to Go driver #73

Merged: 5 commits into go/main from go/callable on Dec 27, 2024
Conversation

@zjrgov (Contributor) commented on Dec 26, 2024

🎫 Addresses issue: https://github.com/GSA-TTS/devtools-program/issues/199

Foundation for calling different functions of the cf-driver executable

🛠 Summary of changes

  • Adds the cobra module to help manage commands (a minimal sketch of the resulting structure follows this list).
  • Adds some "documentation" to the commands, mostly lifted from GitLab's custom executor docs and meant to be updated as the actual commands are written.
  • Removes the stand-in code that was in main.
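
For reference, here is a minimal single-file sketch of the wiring this sets up (the rootCmd and DriveCmd names match the excerpts quoted in the review below; prepareCmd and its Run body are placeholders rather than the actual implementation, and the real code is split across files):

package main

import (
	"fmt"
	"os"

	"github.com/spf13/cobra"
)

// rootCmd is the top-level "cfd" command that operators run directly.
var rootCmd = &cobra.Command{
	Use:   "cfd",
	Short: "CloudFoundry Driver for the GitLab Custom executor",
}

// DriveCmd groups the stage subcommands that gitlab-runner invokes,
// e.g. "cfd drive prepare".
var DriveCmd = &cobra.Command{
	Use:   "drive",
	Short: "Drive stages requested by gitlab-runner's executor",
}

// prepareCmd is a placeholder stage subcommand, shown only to illustrate the shape.
var prepareCmd = &cobra.Command{
	Use:   "prepare",
	Short: "Prepare the environment for a job",
	Run: func(cmd *cobra.Command, args []string) {
		fmt.Println("prepare stage not implemented yet")
	},
}

func main() {
	DriveCmd.AddCommand(prepareCmd)
	rootCmd.AddCommand(DriveCmd)
	if err := rootCmd.Execute(); err != nil {
		os.Exit(1)
	}
}

Grouping the stage commands under drive, rather than attaching them directly to rootCmd, is the structural choice discussed in the review thread below.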

📜 Testing Plan

How would a peer test this work?

  • go run main.go will execute the top-level command
  • It should tell you about drive, which is the main command to be fed to gitlab-runner.
  • It should run subcommands like go run main.go drive prepare
  • Each command & subcommand should return something and should work with the -h or --help flag.

👀 Screenshots and Evidence

Here's an example of what some of the output looks like:

This is CloudFoundry Driver for the GitLab Custom executor.

The gitlab-runner service should run cfd with it's "drive" subcommands,
e.g., "cfd drive prepare".

Usage:
  cfd [command]

Available Commands:
  drive       Drive stages requested by gitlab-runner's executor
  help        Help about any command
  completion  Generate the autocompletion script for the specified shell

Flags:
  -h, --help   help for cfd

Use "cfd [command] --help" for more information about a command.

Also includes some long command descriptions that are mostly
lifted from GitLab's Custom executor documentation.
@zjrgov changed the base branch from main to go/main on December 26, 2024 17:22
@zjrgov self-assigned this on Dec 27, 2024
@zjrgov requested a review from a team on December 27, 2024 13:28
@rahearn (Contributor) left a comment:
nice framework, and I appreciate the docs being in-line and having links a ton

}

var rootCmd = &cobra.Command{
	Use: "cfd",
@rahearn (Contributor):
[question] - the docs mention cfd but when I run make build I get a cf-driver executable. Should those be consistent? If something needs to change I do like the shorter version more.

@zjrgov (Contributor, Author):

Good catch, I haven't been using the actual executable much & didn't think of it.

}

var DriveCmd = &cobra.Command{
	Use: "drive",
@rahearn (Contributor):
[question] is there a planned use for calling drive by itself ever? Feels like we could attach its subcommands directly to root.go instead

@zjrgov (Contributor, Author):
My thinking was to segregate the gitlab-runner stage commands so that there was room in the future to add other stuff to the executable. I'm not sure that would necessarily happen—I was imagining some kind of administrative / configurative command could come in, a doctor or who knows. It could be that that never happens, but given that nobody is actually going to type these often, it seemed pretty low-cost?

I'm not super attached to the idea though.

@rahearn (Contributor):
That makes sense, and yeah, since these are basically never typed I'm not opposed to the extra characters.

Both prepare_exec and run_exec are successful.
cleanup_exec fails.

The user can set cleanup_exec_timeout if they want to set some kind of
@rahearn (Contributor):
ooooh - we should replace the current CUSTOM_ENV_PRESERVE_* behavior with a long (1 hour?) timeout instead of just skipping

@zjrgov (Contributor, Author):
Mm yeah that would be good. An hour feels long to me, but I'm supposing it must not to you?

@zjrgov (Contributor, Author):
Oh I misunderstood you when I first read this and thought you were just talking about adding a timeout to make sure broken jobs get cleaned up.

How are you thinking this would work? We could put something like a long sleep in the run step? It does seem like an easy way to take care of at least half of #34.

@rahearn (Contributor):
Ideally it'd be something like what circleCI does, which is 10 minutes + whatever time you're ssh'd in to the runner to do your debugging. 1 hour was assuming that we wouldn't be able to have a dynamic timeout like that.

@rahearn (Contributor):
> How are you thinking this would work?

🤷🏻 this might also be a misinterpretation of the cleanup_exec_timeout docs that triggered this whole thread. I was thinking that we could set that config and gitlab would take care of not calling cleanup right away, but that's probably wrong.

In any case, it was a thought for #34 or similar, not this PR.

@zjrgov (Contributor, Author):
I think I misunderstood it… I was thinking it was a timeout on how long it would wait to kill run, but it's a timeout on how long it will wait to kill cleanup. Every stage but run has one of the timeouts, and run's time is managed in GitLab's settings globally and per runner.

But I do think we could use it to handle the debugging PRESERVE behavior better (a rough sketch follows this list):

  • set a long timeout on cleanup, just to override a shorter default if there is one
  • set a sleep that's a bit shorter than the timeout
  • do the regular cleanup after the sleep
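
A rough sketch of how that could look in the cleanup stage, purely for illustration: the CUSTOM_ENV_PRESERVE variable name, the 55-minute delay, and doCleanup are placeholders, not anything that exists in the driver today.

package main

import (
	"log"
	"os"
	"time"
)

// preserveDelay would be set a bit shorter than the cleanup_exec_timeout we
// configure, so cleanup still finishes before gitlab-runner kills the stage.
// The value here is arbitrary.
const preserveDelay = 55 * time.Minute

// runCleanup sketches the idea: if the job asked for its environment to be
// preserved, wait, then do the normal cleanup anyway.
func runCleanup() {
	// "CUSTOM_ENV_PRESERVE" is a stand-in for whatever variable actually
	// triggers the preserve behavior today.
	if os.Getenv("CUSTOM_ENV_PRESERVE") != "" {
		log.Printf("preserve requested; sleeping %s before cleanup", preserveDelay)
		time.Sleep(preserveDelay)
	}
	doCleanup()
}

// doCleanup stands in for the real teardown (deleting the CF app, etc.).
func doCleanup() {}

func main() {
	runCleanup()
}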

@zjrgov merged commit bf2550c into go/main on Dec 27, 2024
1 check passed
@zjrgov deleted the go/callable branch on December 27, 2024 15:38