[DRAFT] Terraform refactor / Go buildpack deploy #75
Draft · jadudm wants to merge 46 commits into main from jadudm/tf-0103
+1,533 −1,004
Conversation
This brings the TF back to functional. Interim checkin.
This is setting up for multiple envs.
Those now need to be turned into go buildpacks.
jadudm changed the title from Terraform refactor / Go buildpack deploy to [DRAFT] Terraform refactor / Go buildpack deploy on Jan 5, 2025
Going to see if I can do this in a branch.
This will fail in multiple ways, because I have no secrets configured, etc. But I'd like to see what the runner does just the same.
I can't see the action...
Apparently.
Let's avoid bash scripts.
The previous Terraform had no modularity at all. It was one file, with no abstraction. This introduces a new structure, and it should allow for (relatively) easy deployment to multiple spaces (e.g. `dev`, `staging`, and `production`). These notes will also be moved into a README in the TF directory.

launching the stack

Running Terraform at the top of the tree will deploy the `dev` stack. More work needs to be done in order to store the TF state in S3, so that we can run this from GitHub Actions. For now, this is not complete; if different devs deploy, they will have to completely destroy (tear down) the state of the other devs. This will become... annoying... once we start storing data in buckets. (Buckets must be empty in order to be torn down.)

So, the deploy to Cloud.gov is still a work-in-progress. But it is possible, while testing/developing, to do a deploy from a local machine. Once we have GH Actions in place, we will never deploy from a local machine; we will always do our deploys from an action.
layout

At the top of the `terraform` directory are two files that matter.

`developers.tf` will become part of our onboarding. This file is where devs add themselves as an initial commit so that they gain access to the Cloud.gov environment. We will control access to Cgov through this file. (This wiring is not in place yet, but the file is there. The access controls have to be implemented as scripts executed in a GitHub Action that call the CF API on Cloud.gov.)
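Because that wiring does not exist yet, the exact shape of `developers.tf` is still open; one plausible, purely illustrative shape is a roster of Cloud.gov account emails that later automation can read when it grants space access.

```hcl
# Hypothetical sketch: developers.tf as a simple roster of accounts.
# The GitHub Action that consumes this and calls the CF API does not exist yet.
locals {
  developers = [
    "first.last@gsa.gov", # placeholder entry; devs add themselves here
  ]
}
```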
Cgov deployments are organized into organizations and spaces. An organization might be `gsa-tts-search`, and a space might be `dev`, `staging`, or `production`.

There are two directories (currently) that contain the Terraform deploy scripts.

`dev` contains the variables and drivers for deploying to our (eventual) `dev` space. Every service that we deploy will get a section in this file; a sketch of one such section follows.
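This sketch assumes module paths and variable names that match the layout described here; it is illustrative rather than copied from the diff.

```hcl
# Hypothetical per-service section in dev/main.tf; names and values are illustrative.
module "fetch" {
  source = "../shared/services/fetch"

  cf_space_id = var.cf_space_id # UUID of the dev space
  memory      = 128             # MB per instance
  disk        = 256             # MB per instance
  instances   = 1

  # Bindings to the backing services created for this space.
  database_ids = module.databases.ids # assumed map(string) output, see below
  bucket_ids   = module.buckets.ids   # assumed map(string) output, see below
}
```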
I have not yet determined if this can be made reusable between spaces (meaning, avoiding the boilerplate-ness of this). Each service has to be wired up to the correct databases and S3 buckets in its space in order to execute. Further, we might want to allocate different amounts of RAM, disk, and instances to services in the different spaces. That is, we might run 1 instance of `fetch` in the `dev` environment, but 3 instances of `fetch` in `production`. Because we only have one pool of RAM for all of the spaces combined, we will probably run light in lower environments, and run a fuller stack in `production`.
The service itself is defined in `shared/services/<service-name>`. We apparently have to include the provider (?), define the variables for the module, the outputs, and the module itself. Put another way:

- `providers.tf` is boilerplate. It will need to change when we switch to the official `cloudfoundry/cloudfoundry` provider.
- `variables.tf` defines the variables that the service needs to have defined in order to execute. For example, when instantiating the module, we need to provide the amount of RAM, disk, and the number of instances the service will be created with. (A sketch follows after this list.)
- `service.tf` defines the service itself.
We can see the `fetch` service:
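The following is a rough sketch of the shape of `shared/services/fetch/service.tf`, assuming the community Cloud Foundry provider's `cloudfoundry_app` resource; the argument names and values are assumptions, and the real file may differ.

```hcl
# Hypothetical service.tf for fetch, using the community Cloud Foundry provider.
resource "cloudfoundry_app" "fetch" {
  name      = "fetch"
  space     = var.cf_space_id
  path      = var.source_path  # assumed variable pointing at the repo root
  buildpack = "go_buildpack"
  command   = "./fetch"        # hypothetical per-service entry point built from cmd/

  memory     = var.memory    # MB
  disk_quota = var.disk      # MB
  instances  = var.instances
}
```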
All of the services get the entire codebase; this is because we then launch, on a per-instance basis, different code from `cmd`.

Variables include the ID of the space we are deploying to (e.g. we do not deploy to `dev`, but to a UUID4 value representing `dev`), the disk, memory, and instances, and, more importantly, bindings to the databases and S3 buckets.

buckets and databases

In `shared/cloudgov` are module definitions for our databases and S3 buckets.

In `dev/main.tf`, we instantiate these as follows:
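This instantiation is also a sketch: the module paths, variable names, and plan names are assumptions chosen to illustrate per-space sizing, not values taken from the diff.

```hcl
# Hypothetical dev/main.tf instantiation of the shared database and bucket modules.
module "databases" {
  source = "../shared/cloudgov/databases"

  cf_space_id = var.cf_space_id
  rds_plan    = "micro-psql" # dev stays small; production might use an xl plan
}

module "buckets" {
  source = "../shared/cloudgov/buckets"

  cf_space_id = var.cf_space_id # buckets need no sizing configuration
}
```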
For `dev`, we might only use `micro` instances. For production, however, we might instantiate `xl` instances. This lets us configure the databases on a per-space basis. (S3 buckets are all the same, so there is no configuration.)

This module has outputs. Once instantiated, we can refer to `module.databases` as a `map(string)` and reference the `id` of each of the databases (or buckets). In this way, we can pass the entire map of IDs to the services, and they can then bind to the correct databases/S3 buckets. Most (all?) services will want to bind to the queues database; only some need to bind to `work`, and some need to bind to `serve`.
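As a sketch of how that map could be produced and consumed, the output name, database names, and binding syntax below are assumptions (the binding uses the community provider's `service_binding` block):

```hcl
# Hypothetical outputs.tf in shared/cloudgov/databases: expose instance IDs as a
# map(string) keyed by database name, so services can bind by name.
output "ids" {
  value = {
    queues = cloudfoundry_service_instance.queues.id
    work   = cloudfoundry_service_instance.work.id
    serve  = cloudfoundry_service_instance.serve.id
  }
}

# Hypothetical consumption inside a service module's cloudfoundry_app resource:
#
#   service_binding {
#     service_instance = var.database_ids["work"]
#   }
```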