added ansible-bootcamp envtype

sborenst committed May 16, 2017
1 parent 5d35160 commit 25e039e
Showing 12 changed files with 1,161 additions and 0 deletions.
190 changes: 190 additions & 0 deletions ansible/configs/ansible-bootcamp/How.To.Create.Env.Type.adoc
= How to create an Environment Type

== Create a base for your new environment type

* Duplicate the "generic-example" environment type directory, or start from another
environment type directory that is closer to your end goal.

== Edit your cloud provider "blueprint" or "template"

NOTE: At this point only the "aws" cloud provider is supported; other providers will be added over time.

* Edit the link:./files/cloud_providers/ec2_cloud_template.j2[./files/cloud_providers/ec2_cloud_template.j2]

* Add Security Groups if you require any.
* Add LaunchConfigs and AutoScale Groups, for example:

----
"HostLC": {
  "Type": "AWS::AutoScaling::LaunchConfiguration",
  "Properties": {
    "AssociatePublicIpAddress": true,
    "ImageId": {
      "Fn::FindInMap": [
        "RegionMapping",
        {
          "Ref": "AWS::Region"
        },
        "AMI"
      ]
    },
    "InstanceType": "{{host_instance_type}}",
    "KeyName": "{{key_name}}",
    "SecurityGroups": [
      {
        "Ref": "HostSG"
      }
    ],
    "BlockDeviceMappings": [
      {
        "DeviceName": "/dev/xvda",
        "Ebs": {
          "VolumeSize": 30
        }
      },
      {
        "DeviceName": "/dev/xvdb",
        "Ebs": {
          "VolumeSize": 100
        }
      }
    ]
  }
},
"HostAsg": {
  "Type": "AWS::AutoScaling::AutoScalingGroup",
  "Properties": {
    "DesiredCapacity": {{host_instance_count}},
    "LaunchConfigurationName": {
      "Ref": "HostLC"
    },
    "MaxSize": 100,
    "MinSize": 1,
    "Tags": [
      {
        "Key": "Name",
        "Value": "host",
        "PropagateAtLaunch": true
      },
      {
        "Key": "AnsibleGroup",
        "Value": "hosts",
        "PropagateAtLaunch": true
      },
      {
        "Key": "Project",
        "Value": "{{project_tag}}",
        "PropagateAtLaunch": true
      },
      {
        "Key": "{{ project_tag }}",
        "Value": "host",
        "PropagateAtLaunch": true
      }
    ],
    "VPCZoneIdentifier": [
      {
        "Ref": "PublicSubnet"
      }
    ]
  }
},
----

** Pay attention to the Tags created for the different AutoScale groups; the Ansible dynamic inventory uses them to group hosts:

----
{
  "Key": "Project",
  "Value": "{{project_tag}}",
  "PropagateAtLaunch": true
},
{
  "Key": "{{ project_tag }}",
  "Value": "host",
  "PropagateAtLaunch": true
}
----
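These tag keys and values are what the EC2 dynamic inventory script turns into Ansible host groups, with hyphens replaced by underscores. A minimal sketch of the naming, assuming (as elsewhere in these configs) that `project_tag` is `env_type`-`guid`; the concrete values are illustrative:

```python
# Sketch: how a tag with Key="{{ project_tag }}" and Value="host" becomes an
# Ansible EC2 dynamic-inventory group name. env_type/guid are illustrative.
env_type = "ansible-bootcamp"
guid = "test1"
project_tag = env_type + "-" + guid          # convention used in these configs

# ec2.py builds groups as tag_<Key>_<Value>, replacing '-' with '_'
group_name = ("tag_" + project_tag + "_host").replace("-", "_")
print(group_name)  # tag_ansible_bootcamp_test1_host
```

This is the same group-name construction the internal DNS template performs with `('tag_' + env_type + '-' + guid + '_support') | replace('-', '_')`.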


* Add DNS Entries you need for your environment:
----
"MasterDNS": {
  "Type": "AWS::Route53::RecordSetGroup",
  "DependsOn": "Master",
  "Properties": {
    "HostedZoneId": "{{HostedZoneId}}",
    "RecordSets": [
      {
        "Name": "{{master_public_dns}}",
        "Type": "A",
        "TTL": "10",
        "ResourceRecords": [
          {
            "Fn::GetAtt": [
              "Master",
              "PublicIp"
            ]
          }
        ]
      }
    ]
  }
},
----

* Add S3 or other resources you require:
----
"RegistryS3": {
  "Type": "AWS::S3::Bucket",
  "Properties": {
    "BucketName": "{{ env_type }}-{{ guid }}",
    "Tags": [
      {
        "Key": "Name",
        "Value": "s3-{{ env_type }}-{{ guid }}"
      },
      {
        "Key": "Project",
        "Value": "{{project_tag}}"
      }
    ]
  }
},
----
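The rendered bucket name is simply `env_type`-`guid`. A quick sketch (values illustrative) of the rendered result, checked against the basic S3 naming rules it must satisfy (lowercase, 3-63 characters):

```python
import re

# Illustrative values; in a real run these come from the playbook vars
env_type, guid = "ansible-bootcamp", "test1"
bucket_name = f"{env_type}-{guid}"   # what "{{ env_type }}-{{ guid }}" renders to

# S3 bucket names: 3-63 chars of lowercase letters, digits, dots, hyphens
assert re.fullmatch(r"[a-z0-9.-]{3,63}", bucket_name)
print(bucket_name)  # ansible-bootcamp-test1
```

Combining `env_type` and `guid` also keeps the name unique per deployed environment, since S3 bucket names are globally unique.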

* Add any "outputs" you need from the cloud provider:
----
"RegistryS3Output": {
  "Description": "The ID of the S3 Bucket",
  "Value": {
    "Ref": "RegistryS3"
  }
},
----



== Internal DNS file


* Edit the internal dns template: link:./files/ec2_internal_dns.json.j2[./files/ec2_internal_dns.json.j2]
** You can create nicely indexed internal hostnames by adding a for loop in the file for each host group:
----
{% for host in groups[('tag_' + env_type + '-' + guid + '_support') | replace('-', '_') ] %}
{
  "Action": "{{DNS_action}}",
  "ResourceRecordSet": {
    "Name": "support{{loop.index}}.{{zone_internal_dns}}",
    "Type": "A",
    "TTL": 20,
    "ResourceRecords": [ { "Value": "{{hostvars[host]['ec2_private_ip_address']}}" } ]
  }
},
{% endfor %}
----
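A minimal sketch of how that loop renders, driving the `jinja2` library directly with made-up group and host data (group membership normally comes from the EC2 dynamic inventory); the template body is simplified to emit one line per host rather than the full change-batch JSON:

```python
from jinja2 import Template

env_type, guid = "ansible-bootcamp", "test1"
group = ("tag_" + env_type + "-" + guid + "_support").replace("-", "_")

# Fake inventory data standing in for what ec2.py would provide
groups = {group: ["ip-10-0-1-10", "ip-10-0-1-11"]}
hostvars = {
    "ip-10-0-1-10": {"ec2_private_ip_address": "10.0.1.10"},
    "ip-10-0-1-11": {"ec2_private_ip_address": "10.0.1.11"},
}

template = Template(
    "{% for host in groups[('tag_' + env_type + '-' + guid + '_support')"
    " | replace('-', '_')] %}"
    "support{{ loop.index }}.{{ zone_internal_dns }} A "
    "{{ hostvars[host]['ec2_private_ip_address'] }}\n"
    "{% endfor %}"
)
rendered = template.render(groups=groups, hostvars=hostvars,
                           env_type=env_type, guid=guid,
                           zone_internal_dns="test1.internal.")
print(rendered)
# support1.test1.internal. A 10.0.1.10
# support2.test1.internal. A 10.0.1.11
```

Note that `loop.index` is 1-based, which is what gives the hosts their `support1`, `support2`, ... names.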
84 changes: 84 additions & 0 deletions ansible/configs/ansible-bootcamp/README.adoc
= OPENTLC OCP-HA-LAB Env_Type config

This config holds the environment-specific settings: for example, ec2 instance
names, secret variables such as private/public key pair information, passwords, etc.

Eventually, all sensitive information will be encrypted via Ansible Vault. The
inclusion as well as instructions on doing this will be included in a later
release.

== Set up your "Secret" variables

* You need to provide some credentials for deployments to work
* Create a file called "env_secret_vars.yml" and put it in the
./ansible/configs/CONFIGNAME/ directory (this is the file name the playbooks
load via vars_files).
** At this point the file must exist even if no vars from it are used.
* You can choose to provide these values as extra vars (-e "var=value") in the
command line if you prefer not to keep sensitive information in a file.
* In the future we will use ansible vault for this.

.Example contents of "Secret" Vars file
----
# ## Logon credentials for Red Hat Network
# ## Required if using the subscription component
# ## of this playbook.
rhel_subscription_user: ''
rhel_subscription_pass: ''
#
# ## LDAP Bind Password
bindPassword: ''
#
# ## Desired openshift admin name and password
admin_user: ""
admin_user_password: ""
#
# ## AWS Credentials. This is required.
aws_access_key_id: ""
aws_secret_access_key: ""
----


== Review the Env_Type variable file

* This file link:./env_vars.yml[./env_vars.yml] contains all the variables you
need to define to control the deployment of your environment.

== Running Ansible Playbook

. You can run the playbook with the following arguments to override the default variable values:
[source,bash]
----
# Set your environment variables (optional, but makes life easier)
REGION=ap-southeast-1
KEYNAME=ocpkey
GUID=ansibletest1
ENVTYPE="ansible-bootcamp"
CLOUDPROVIDER=ec2
HOSTZONEID='Z3IHLWJZOU9SRT'
#REPO_PATH='https://admin.example.com/repos/ocp/3.5/'
BASESUFFIX='.example.opentlc.com'
#IPAPASS=aaaaaa
REPO_VERSION=3.5
TOWER_COUNT=3
TOWERDB_COUNT=3
## For an HA environment without OpenShift installed
time ansible-playbook -i ./inventory/ ./main.yml \
-e "guid=${GUID}" -e "env_type=${ENVTYPE}" -e "cloud_provider=${CLOUDPROVIDER}" \
-e "aws_region=${REGION}" -e "HostedZoneId=${HOSTZONEID}" -e "key_name=${KEYNAME}" \
-e "subdomain_base_suffix=${BASESUFFIX}" \
-e "tower_instance_count=${TOWER_COUNT}" -e "towerdb_instance_count=${TOWERDB_COUNT}" \
-e "software_to_deploy=none" -e "tower_run=false"
----

. To delete an environment:
[source,bash]
----
# To destroy an environment
ansible-playbook ./configs/${ENVTYPE}/destroy_env.yml \
-e "guid=${GUID}" -e "env_type=${ENVTYPE}" -e "cloud_provider=${CLOUDPROVIDER}" -e "aws_region=${REGION}" \
-e "HostedZoneId=${HOSTZONEID}" -e "key_name=${KEYNAME}" -e "subdomain_base_suffix=${BASESUFFIX}"
----
46 changes: 46 additions & 0 deletions ansible/configs/ansible-bootcamp/destroy_env.yml
- name: Starting environment destroy
  hosts: localhost
  connection: local
  gather_facts: False
  become: no
  vars_files:
    - "./env_vars.yml"
    - "./env_secret_vars.yml"

  tasks:
    # - name: get internal dns zone id if not provided
    #   environment:
    #     AWS_ACCESS_KEY_ID: "{{aws_access_key_id}}"
    #     AWS_SECRET_ACCESS_KEY: "{{aws_secret_access_key}}"
    #     AWS_DEFAULT_REGION: "{{aws_region}}"
    #   shell: "aws route53 list-hosted-zones-by-name --region={{aws_region}} --dns-name={{guid}}.internal. --output text --query='HostedZones[*].Id' | awk -F'/' '{print $3}'"
    #   register: internal_zone_id_register
    # - debug:
    #     var: internal_zone_id_register
    # - name: Store internal route53 ID
    #   set_fact:
    #     internal_zone_id: "{{ internal_zone_id_register.stdout }}"
    #   when: 'internal_zone_id_register is defined'
    # - name: delete internal dns names
    #   environment:
    #     AWS_ACCESS_KEY_ID: "{{aws_access_key_id}}"
    #     AWS_SECRET_ACCESS_KEY: "{{aws_secret_access_key}}"
    #     AWS_DEFAULT_REGION: "{{aws_region}}"
    #   shell: "aws route53 change-resource-record-sets --hosted-zone-id {{internal_zone_id}} --change-batch file://../../workdir/internal_dns-{{ env_type }}-{{ guid }}_DELETE.json --region={{aws_region}}"
    #   ignore_errors: true
    #   tags:
    #     - internal_dns_delete
    #   when: internal_zone_id is defined

    - name: Destroy cloudformation template
      cloudformation:
        stack_name: "{{project_tag}}"
        state: "absent"
        region: "{{aws_region}}"
        disable_rollback: false
        template: "../../workdir/ec2_cloud_template.{{ env_type }}.{{ guid }}.json"
        tags:
          Stack: "project {{env_type}}-{{ guid }}"
      tags: [ destroying, destroy_cf_deployment ]
      ## we need to add something to delete the env specific key.