AWS
This document walks you through the steps of creating and configuring a database cluster (3 nodes: 1 master and 2 standbys) in your AWS account. It also provides scripts to deploy EDB tools such as EDB Failover Manager (EFM), Postgres Enterprise Manager (PEM), and the EDB Backup and Recovery Tool (BART). We use Terraform to provision the infrastructure and Ansible to configure the EDB software on the provisioned nodes.
We use CentOS 7 as the base AMI for instance creation.
- Terraform installed on your machine
- Ansible installed on your machine.
- IAM users with programmatic access.
- VPC created on your AWS account in the region of your choice.
- Minimum 3 subnets created in your VPC with public IP auto-enabled.
- Key pair created in advance for SSH
- Subscription to the EDB yum repository (if using EDB Postgres)
- Optional IAM role created.
- S3 bucket created for backup.
- Subscription to the CentOS 7 AMI on the AWS Marketplace. (https://aws.amazon.com/marketplace/pp/Centosorg-CentOS-7-x8664-with-Updates-HVM/B00O7WM7QW)
You can install Terraform by following the instructions here. We have tested with Terraform version 0.12.18.
You can install Ansible by following the instructions here.
You need to create an IAM user with programmatic access. This IAM user is required to create resources in your AWS account.
The minimum permissions for this IAM user are as follows:
- Create an instance
- Terminate an instance
- Create an EIP
- Associate an EIP
- Release an EIP
- Get and put S3 objects
An additional (optional) permission is required to attach an IAM role to the EC2 instance. EFM needs this IAM role: in case of failover, we detach the EIP from the failed master node and attach it to the new master node.
You need to create the VPC in advance with a minimum of 3 subnets. Resources are created in these 3 different subnets to achieve high availability.
You need to create a key pair and download its .pem file to the local system where Terraform and Ansible are installed. The absolute path and name of the .pem file are required in the Terraform input file.
If you are installing EDB Postgres, you need EDB yum repository credentials to download the software; provide them in the Terraform input file.
The following steps will create a 3-node cluster.
Download the deployment scripts from the git repository. Go inside the DB_Cluster_AWS folder and edit the file with the .tf extension using your favorite editor.
Each field has its description commented out.
Modify the following mandatory fields in the terraform config file.
- instance_type
- ssh_keypair
- ssh_key_path
- subnet_id
- dbengine
- replication_password
- vpc_id
- s3bucket
Except for subnet_id, all the fields accept a single string value. The subnet_id field requires a list of comma-separated values in square brackets.
For example: subnet_id = ["id1", "id2", "id3"]
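Putting the mandatory fields together, a filled-in config might look like the following sketch. All values here are hypothetical placeholders; use your own, and see the comments in the downloaded .tf file for the authoritative field descriptions.

```hcl
instance_type        = "t2.micro"
ssh_keypair          = "pgdeploy"                 # key pair name, without .pem
ssh_key_path         = "/home/me/pgdeploy.pem"    # absolute path to the .pem file
subnet_id            = ["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"]
dbengine             = "epas12"
replication_password = "changeme"
vpc_id               = "vpc-0123456789"
s3bucket             = "mybucket/walarchive"
```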
Along with these mandatory fields, there are the following optional fields:
- EDB_yumrepo_username
- EDB_yumrepo_password
- iam-instance-profile
- custom_security_group_id
- replication_type
- db_user
- db_password
- cluster_name
The downloaded Terraform config file has a description of these fields; here we provide a more detailed explanation of each.
AWS gives you the option to select a machine type as per your requirements. There is a wide range of instance types you can select and provide in the Terraform config file, and Terraform will create an instance of that type. For example: t2.micro, m4.xlarge, etc.
When you create a key pair in AWS you give it a name; provide that same name in this field. Please do not include the .pem extension.
This is the absolute path to your key pair file, ending in .pem, e.g. /usr/kanchan/pgdeploy.pem. Please make sure you change the permissions of this file to either 0400 or 0600 using the command chmod 0400 filename.pem.
When you create the VPC, you need to create subnets as well; provide the IDs of those subnets here, for example subnet-20bdsj06b. Make sure these are public subnets with the auto-assign public IP setting enabled.
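As a quick check of the key-file permissions described above (mykey.pem is a hypothetical placeholder name):

```shell
touch mykey.pem            # stands in for your downloaded key pair file
chmod 0400 mykey.pem       # owner read-only, as required
stat -c '%a' mykey.pem     # prints 400
```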
We provide the option to select among different DB engines when creating the cluster. The supported DB engines are as follows.
DB engine | dbengine value |
---|---|
PostgreSQL version 10 | pg10 |
PostgreSQL version 11 | pg11 |
PostgreSQL version 12 | pg12 |
Enterprise PostgreSQL version 10 | epas10 |
Enterprise PostgreSQL version 11 | epas11 |
Enterprise PostgreSQL version 12 | epas12 |
We create a 3-node cluster with replication set up among the nodes. This field sets the password for the replication role edbrepuser, which we create on your behalf.
This is the AWS VPC ID you have to provide. Terraform will create resources in this VPC only.
This is the S3 bucket name you have to provide for the WAL archive. The server needs push permission to this S3 bucket; we recommend granting it via the IAM role attached with iam-instance-profile.
If you have selected Enterprise PostgreSQL as your DB engine, you need to provide this field.
This is the yum repo user password.
As mentioned above, if you are installing EFM, a fencing script executes in the event of a failover; it detaches the EIP from the failed master node and attaches it to the new master node. The instance must therefore have permission to disassociate the EIP and associate it with another instance. Since an IAM role is a secure way to grant this permission, we recommend creating an IAM role and providing its name here; the resources created by Terraform will have this role attached after creation.
This field is optional and you can leave it blank; however, you still need to grant this permission to the instance.
You can either attach the IAM role or provide an access key and secret key by logging into the instance and running the aws configure command.
You can attach the following IAM permissions to this role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1578576699220",
      "Action": [
        "ec2:AssociateAddress",
        "ec2:DisassociateAddress"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Sid": "Stmt1578576863132",
      "Action": [
        "s3:GetObject*",
        "s3:PutObject*"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucketname/*"
    }
  ]
}
You can create a security group in advance and provide its ID here, so you can whitelist the IPs of your choice and open the ports required for your DB server. If you leave this field blank, we will create a new security group on your behalf with the following rules. If you are creating a custom security group, please open the ports below.
Port | IP-Range |
---|---|
22 | 0.0.0.0/0 |
5432 | 0.0.0.0/0 |
5444 | 0.0.0.0/0 |
7800-7900 | 0.0.0.0/0 |
This field selects the replication type: synchronous or asynchronous. The default is asynchronous, which means that if you leave this field blank the cluster is created with asynchronous replication.
If you decide not to use the default DB username (postgres for PostgreSQL, enterprisedb for Enterprise PostgreSQL), you can provide the name of the DB user here. We will create this user and assign it the default password postgres (which you can change later).
This is a custom DB password for your DB user. If you leave this field blank, the default password is postgres.
This is an optional field for tagging your resources. You can leave it blank, and the dbengine value will be used as the tag name by default.
Once you have edited the file and supplied the mandatory fields, save it and exit. Before executing it, you need to set the following environment variables:
export AWS_ACCESS_KEY_ID=Your_AWS_Access_ID
export AWS_SECRET_ACCESS_KEY=Your_AWS_Secret_Key_ID
These are the AWS credentials for creating resources in your AWS account; Terraform needs them before we execute the Terraform config file.
export PATH=$PATH:/path_of_terraform_binary_file_directory
This is the absolute path of the directory containing the terraform binary you downloaded while installing Terraform.
Once you have done this, run the following commands to execute the Terraform config file:
terraform init
terraform apply
This prompts you to enter the region where the resources will be created. Supply the region name; make sure it is the same region where the VPC you provided in the Terraform input config file was created.
You will then be prompted to enter yes to create the resources. Type yes and hit the enter key. This starts the process of creating and configuring the DB cluster.
When the process completes, you will see the cluster IP addresses (both public and private).
If you already have the infrastructure ready and want to install and configure the DB, you can run the ansible playbooks to set it up.
Here are the steps you can follow.
Go to the module directory EDB_SR_SETUP/utilities/scripts.
Create a hosts file with the following content:
master_public_ip ansible_user= ansible_ssh_private_key_file=
slave1_public_ip ansible_user= ansible_ssh_private_key_file=
slave2_public_ip ansible_user= ansible_ssh_private_key_file=
Replace the username and the key file path with your values.
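For instance, a filled-in hosts file can be generated like this. The IPs, SSH user, and key path are hypothetical placeholders (centos is the usual SSH user for the CentOS 7 AMI; adjust if yours differs):

```shell
# Hypothetical values -- replace with your instances' public IPs,
# the AMI's SSH user, and the absolute path to your .pem key.
cat > hosts <<'EOF'
203.0.113.10 ansible_user=centos ansible_ssh_private_key_file=/home/me/mykey.pem
203.0.113.11 ansible_user=centos ansible_ssh_private_key_file=/home/me/mykey.pem
203.0.113.12 ansible_user=centos ansible_ssh_private_key_file=/home/me/mykey.pem
EOF
```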
Use below command to run ansible playbook. Make sure you are providing extra arguments.
ansible-playbook -i hosts setupsr.yml --extra-vars='USER= PASS= EPASDBUSER= PGDBUSER= ip1= ip2= ip3= S3BUCKET= REPLICATION_USER_PASSWORD= DBPASSWORD= REPLICATION_TYPE= DB_ENGINE= MASTER= SLAVE1= SLAVE2='
Where
- USER is the EDB yum repo user name (if using EDB Postgres)
- PASS is the EDB yum repo user password
- EPASDBUSER is the DB username for EDB Postgres (leave blank if using community Postgres)
- PGDBUSER is the DB username for community Postgres (leave blank if using EDB Postgres)
- ip1 is the internal IP of the master server
- ip2 is the internal IP of slave1
- ip3 is the internal IP of slave2
- S3BUCKET is the S3 bucket name followed by the folder name (e.g. bucketname/foldername)
- REPLICATION_USER_PASSWORD is the replication user/role password
- DBPASSWORD is the DB password
- REPLICATION_TYPE is synchronous or asynchronous
- DB_ENGINE is one of pg10, pg11, pg12, epas10, epas11, epas12
- MASTER is the master server public IP
- SLAVE1 is the slave1 server public IP
- SLAVE2 is the slave2 server public IP
This section walks you through the steps of setting up EFM on the cluster created earlier.
Note: before setting up EFM, you must attach the IAM role or provide an access key/secret key on the server by running aws configure. In case of failover, EFM runs a fencing script which detaches the EIP from the failed master node and attaches it to the new master.
Please refer to the "iam-instance-profile" option in Create a Database Cluster (3 node - 1 Master, 2 Standby).
The DB cluster for which we are setting up EFM must have been created using the EDB deployment scripts.
Download the deployment scripts from the git repository. Go inside the EFM_Setup_AWS directory and open the input file with the .tf extension in your favorite editor.
Add the below fields in that file. You can read the descriptions of the fields in the comments.
- EDB_Yum_Repo_Username
- EDB_Yum_Repo_Password
- notification_email_address
- efm_role_password
- db_user
Though you can find the field descriptions in the input file, here are detailed explanations of each.
This is a mandatory field if you have selected community Postgres (pg10, pg11, pg12) as your DB engine. It is optional if you have selected enterprisedb as your dbengine, since these credentials were already configured when you created the cluster.
Provide an email address where you will receive notifications related to your EFM cluster operation.
We create a separate DB role for EFM operation called edbefm and grant it the permissions required for failover management. In this field, provide the password for this role.
If you created a custom DB user while creating the cluster, you need to provide that user's name. We require this field to create the EFM role, since the create role query is fired by connecting to the DB.
Once you have supplied all these fields, save the input file and exit.
Run the following commands to start setting up the EFM cluster:
terraform init
terraform apply
This prompts you to confirm the creation of the cluster. Type yes and hit enter to proceed. Once finished, you will see resource-created messages on your terminal.
If you already have the infrastructure ready and want to set up EFM, you can run the ansible playbooks to set it up.
Here are the steps you can follow.
Go to the module directory EFM_Setup_AWS/utilities/scripts.
Create a hosts file with the following content:
master_public_ip ansible_user= ansible_ssh_private_key_file=
slave1_public_ip ansible_user= ansible_ssh_private_key_file=
slave2_public_ip ansible_user= ansible_ssh_private_key_file=
Replace the username and the key file path with your values.
Use the below command to run the ansible playbook. Make sure you provide the extra arguments.
ansible-playbook -i hosts setupefm.yml --extra-vars='DB_ENGINE= USER= PASS= DBUSER= EFM_USER_PASSWORD= MASTER= SLAVE1= SLAVE2= ip1= ip2= ip3= NOTIFICATION_EMAIL= REGION_NAME= S3BUCKET='
Where
- DB_ENGINE is one of pg10, pg11, pg12, epas10, epas11, epas12
- USER is the yum repo user name
- PASS is the yum repo user password
- DBUSER is the DB username
- EFM_USER_PASSWORD is the password for the EFM role
- MASTER is the master server public IP
- SLAVE1 is the slave1 public IP
- SLAVE2 is the slave2 public IP
- ip1 is the master server internal IP
- ip2 is the slave1 server internal IP
- ip3 is the slave2 server internal IP
- NOTIFICATION_EMAIL is the email address for EFM notifications
- REGION_NAME is the AWS region code
- S3BUCKET is the S3 bucket name followed by the folder name (e.g. bucketname/foldername)
The PEM monitoring server is used to monitor the DB servers. This section walks you through the steps of creating a PEM monitoring server.
- VPC already created
- Key pair created
- Subnet created in the VPC
Download the deployment scripts from git. Go inside the PEM_Server_AWS folder and open the file with the .tf extension in your favorite editor.
Provide the below fields in the file:
- instance_type
- custom_security_group_id
- ssh_keypair
- ssh_key_path
- subnet_id
- vpc_id
- db_password
- region_name
- EDB_yumrepo_username
- EDB_yumrepo_password
AWS gives you the option to select a machine type as per your requirements. There is a wide range of instance types you can select and provide in the Terraform config file, and Terraform will create an instance of that type. For example: t2.micro, m4.xlarge, etc.
When you create a key pair in AWS you give it a name; provide that same name in this field.
This is the absolute path to your key pair file, ending in .pem. Please make sure you change the permissions of this file to either 0400 or 0600 using the command chmod 0400 filename.pem.
When you create the VPC, you need to create subnets as well; provide the ID of the subnet here, for example subnet-20bdsj06b.
This is the PEM server backend DB password you need to supply. We will install Enterprise PostgreSQL version 11 as the backend database for the PEM server.
You can create a security group in advance and provide its ID here, so you can whitelist the IPs of your choice and open the ports required for your DB server. If you leave this field blank, we will create a new security group on your behalf with the following rules.
If you are creating a custom security group, please open the ports below.
Port | IP-Range |
---|---|
22 | 0.0.0.0/0 |
5432 | 0.0.0.0/0 |
5444 | 0.0.0.0/0 |
8443 | 0.0.0.0/0 |
This is the AWS VPC ID you have to provide. Terraform will create resources in this VPC only.
This is the AWS region name where the resources will be created.
This is a mandatory field. You need EDB yum repository credentials for any EDB tools; provide those credentials in these fields.
Once you have filled in all the details, save this file and exit.
Run terraform init and then terraform apply to start creating resources. This prompts you to enter the AWS region; provide a region code like us-east-1 and hit enter. The next prompt asks for your confirmation to create the resources: type yes and hit the enter key. Terraform starts creating your resources, and the PEM server is configured using Ansible.
Once this completes, you will see the PEM server IP displayed as output on your screen.
Access the PEM server
https://<ip_address_of_PEM_host>:8443/pem
Username: enterprisedb
Password: The DB password you entered in db_password field
If you already have the infrastructure ready and want to configure it as a PEM monitoring server, you can run the ansible playbooks to configure it.
Here are the steps you can follow.
Go to the module directory EDB_PEM_Server/utilities/scripts.
Create a hosts file with the following content:
pem_public_ip ansible_user= ansible_ssh_private_key_file=
Replace the username and the key file path with your values.
Use the below command to run the ansible playbook. Make sure you provide the extra arguments.
ansible-playbook -i hosts pemserver.yml --extra-vars='USER= PASS= DB_PASSWORD= PEM_IP='
Where
- USER is the yum repo user name (required if DB_ENGINE is pg10, pg11, or pg12)
- PASS is the yum repo user password (required if DB_ENGINE is pg10, pg11, or pg12)
- PEM_IP is the IP of the PEM server
- DB_PASSWORD is the password for the PEM server local DB
To start monitoring the DB servers created earlier with the PEM monitoring server, we need to register an agent, the PEM agent, with it. This section explains how to register a PEM agent with the PEM server.
- DB cluster created using EDB deployment scripts.
- PEM server created using EDB deployment scripts.
Download the deployment scripts from the git repo. Go inside the PEM_Agent_AWS directory and open the file with the .tf extension in your favorite text editor.
Add the below fields in the input file:
- EDB_Yum_Repo_Username
- EDB_Yum_Repo_Password
- db_user
- db_password
- pem_web_ui_password
These are the EDB yum repository credentials; you need to pass them if you are using the community version of PostgreSQL, that is pg10, pg11, or pg12.
This is the DB username of the remote server. Provide the username if you have changed it from the default, or leave it blank if you have not.
This is the DB password of the remote server.
This is the PEM monitoring server DB password; the same password is used for logging in to the PEM server web UI.
Once you are done editing the file, save it and exit.
Run the below commands to start the process of registering the PEM agent with the PEM server:
terraform init
terraform apply
This prompts you to enter yes. Type yes and hit enter. This starts the process of registering the PEM agent with the PEM server; at the end, you will see a resource-added message on the display.
If you already have the infrastructure ready and want to set up the PEM agent, you can run the ansible playbooks to configure it.
Below are the steps you can follow to configure it.
Go to the module directory EDB_PEM_AGENT/utilities/scripts.
Create a hosts file with the following content:
master_public_ip ansible_user= ansible_ssh_private_key_file=
slave1_public_ip ansible_user= ansible_ssh_private_key_file=
slave2_public_ip ansible_user= ansible_ssh_private_key_file=
Use the below command to run the ansible playbook. Make sure you provide the extra arguments.
ansible-playbook -i hosts installpemagent.yml --extra-vars='DB_ENGINE= USER= PASS= PEM_IP= DBPASSWORD= PEM_WEB_PASSWORD= EPASDBUSER= PGDBUSER='
Where
- DB_ENGINE is one of pg10, pg11, pg12, epas10, epas11, epas12
- USER is the yum repo user name (required if DB_ENGINE is pg10, pg11, or pg12)
- PASS is the yum repo user password (required if DB_ENGINE is pg10, pg11, or pg12)
- EPASDBUSER is the DB username for EDB Postgres (leave blank if using community Postgres)
- PGDBUSER is the DB username for community Postgres (leave blank if using EDB Postgres)
- DBPASSWORD is the DB password
- PEM_WEB_PASSWORD is the PEM server DB password
- PEM_IP is the IP of the PEM server
We provide this option to scale your existing DB cluster. It not only expands the DB cluster by adding an additional replica to the existing cluster, but also registers the newly created instance with the PEM server and adds it to the EFM cluster, so high availability is built in.
In addition, we provide an option for a mixed replication mode: you can add a new server to the existing cluster with synchronous replication even though the existing cluster was asynchronous. If your existing cluster was created with synchronous replication, leave this field blank.
- DB Cluster set up using EDB deployment scripts.
- PEM server created using EDB deployment scripts.
- EFM cluster set up using EDB deployment scripts.
- VPC created on your AWS account in the region of your choice.
- Key pair created in advance for ssh
- Subnet id
Download the deployment scripts from the git repository. Go inside the Expand_DB_Cluster_AWS directory and open the file with the .tf extension in your favorite text editor.
Add the below fields in the input file:
- EDB_yumrepo_username
- EDB_yumrepo_password
- instance_type
- iam-instance-profile
- custom_security_group_id
- subnet_id
- replication_type
- replication_password
- vpc_id
- notification_email_address
- efm_role_password
- pem_web_ui_password
- db_password
- db_user
AWS gives you the option to select a machine type as per your requirements. There is a wide range of instance types you can select and provide in the Terraform config file, and Terraform will create an instance of that type. For example: t2.micro, m4.xlarge, etc.
When you create the VPC, you need to create subnets as well; provide the ID of the subnet here, for example subnet-20bdsj06b.
We set up replication among the cluster nodes. This field sets the password for the replication role edbrepuser, which we create on your behalf.
This is the AWS VPC ID you have to provide. Terraform will create resources in this VPC only.
If you have selected Enterprise PostgreSQL as your DB engine, you need to provide this field.
This is the yum repo user password.
As mentioned above, if you are installing EFM, a fencing script executes in the event of a failover; it detaches the EIP from the failed master node and attaches it to the new master node. The instance must therefore have permission to disassociate the EIP and associate it with another instance. Since an IAM role is a secure way to grant this permission, we recommend creating an IAM role and providing its name here; the resources created by Terraform will have this role attached after creation.
This field is optional and you can leave it blank; however, you still need to grant this permission to the instance.
You can either attach the IAM role or provide an access key and secret key by logging into the instance and running the aws configure command.
You can create a security group in advance and provide its ID here, so you can whitelist the IPs of your choice and open the ports required for your DB server. If you leave this field blank, we will create a new security group on your behalf with the following rules. If you are creating a custom security group, please open the ports below.
Port | IP-Range |
---|---|
22 | 0.0.0.0/0 |
5432 | 0.0.0.0/0 |
5444 | 0.0.0.0/0 |
7800-7900 | 0.0.0.0/0 |
This field selects the replication type: synchronous or asynchronous. The default is asynchronous, which means that if you leave this field blank the new replica is created with asynchronous replication.
Provide an email address where you will receive notifications related to your EFM cluster operation.
We create a separate DB role for EFM operation called edbefm and grant it the permissions required for failover management. In this field, provide the password for this role.
This is the DB password of the remote server.
This is the DB username, if you do not want to go with the default username. Leaving this field blank uses the default username.
This is the PEM monitoring server DB password; the same password is used for logging in to the PEM server web UI.
Once you are done editing the file, save it and exit.
Run the following commands to start the process of expanding the cluster:
terraform init
terraform apply
This prompts you to enter the region where the resources will be created. Supply the region name; make sure it is the same region where the VPC provided in the Terraform input config file was created. You will then be prompted to enter yes to create the resources. Type yes and hit the enter key. This starts the process of expanding and configuring the DB cluster.
If you already have the infrastructure ready and want to scale the cluster, you can run the ansible playbooks to configure it.
Below are the steps you can follow to configure it.
Go to the module directory EDB_ADD_REPLICA_VMWARE/utilities/scripts.
Create a hosts file with the following content:
master_ip ansible_user= ansible_ssh_pass=
slave1_ip ansible_user= ansible_ssh_pass=
slave2_ip ansible_user= ansible_ssh_pass=
slave3_ip ansible_user= ansible_ssh_pass=
Replace the username and password placeholders with your values.
Use the below command to run the ansible playbook. Make sure you provide the extra arguments.
ansible-playbook -i hosts expandcluster.yml --extra-vars='DB_ENGINE= USER= PASS= PGDBUSER= EPASDBUSER= NEWSLAVE= REPLICATION_USER_PASSWORD= REPLICATION_TYPE= ip1= ip2= ip3= selfip= NOTIFICATION_EMAIL= SLAVE1= SLAVE2= DBPASSWORD= PEM_IP= PEM_WEB_PASSWORD='
Where
- DB_ENGINE is one of pg10, pg11, pg12, epas10, epas11, epas12
- USER is the yum repo user name
- PASS is the yum repo user password
- PGDBUSER is the DB username for community Postgres (leave blank if using EDB Postgres)
- EPASDBUSER is the DB username for EDB Postgres (leave blank if using community Postgres)
- NEWSLAVE is the public IP of the new server
- REPLICATION_USER_PASSWORD is the replication user/role password
- REPLICATION_TYPE is synchronous or asynchronous
- ip1 is the internal IP of the master server
- ip2 is the internal IP of slave1
- ip3 is the internal IP of slave2
- selfip is the internal IP of the new server
- NOTIFICATION_EMAIL is the email address for EFM notifications
- SLAVE1 is the slave1 server public IP
- SLAVE2 is the slave2 server public IP
- DBPASSWORD is the DB password
- PEM_IP is the IP of the PEM server
- PEM_WEB_PASSWORD is the PEM server DB password
The EDB Backup and Recovery Tool (BART) lets users run a separate backup server for their DB servers, which can serve as DR for your DB backups. This section walks you through the steps of creating a BART server using the deployment scripts for AWS.
Note: BART needs passwordless authentication between the BART and DB servers; we handle this in Terraform. The SSH user on the remote server is 'enterprisedb' if the DB engine is EDB Postgres, and 'postgres' otherwise.
- DB server created using deployment script DB_Cluster_AWS
- VPC already created.
- Key pair created.
- Subnet created in the VPC.
- Subscription to EDB yum repository.
- Terraform installed on your machine
- Ansible installed on your machine.
## Installation and configuration
Download the deployment scripts from the git repository. Go inside the BART_AWS folder and edit the file edb_bart_input.tf using your favorite editor.
The file has the following input parameters; each input field has its description commented out.
- EDB_yumrepo_username
- EDB_yumrepo_password
- vpc_id
- subnet_id
- instance_type
- custom_security_group_id
- ssh_keypair
- ssh_key_path
- db_user
- db_password
- retention_period
- size
Descriptions of all the above input parameters are given in the input file. Here are the detailed descriptions.
If you have selected Enterprise PostgreSQL as your DB engine, you need to provide this field.
This is the yum repo user password.
AWS gives you the option to select a machine type as per your requirements. There is a wide range of instance types you can select and provide in the Terraform config file, and Terraform will create an instance of that type. For example: t2.micro, m4.xlarge, etc.
You can create a security group in advance and provide its ID here, so you can whitelist the IPs of your choice and open the ports required for your DB server. If you leave this field blank, we will create a new security group on your behalf with the following rules. If you are creating a custom security group, please open the ports below.
Port | IP-Range |
---|---|
22 | 0.0.0.0/0 |
When you create the VPC, you need to create subnets as well; provide the ID of the subnet here, for example subnet-20bdsj06b.
This is the AWS VPC ID you have to provide. Terraform will create resources in this VPC only.
This is the DB password of the remote server whose backups are stored on the BART server.
This is the DB username of the remote server whose backups are stored on the BART server.
When you create a key pair in AWS you give it a name; provide that same name in this field.
This is the absolute path to your key pair file, ending in .pem. Please make sure you change the permissions of this file to either 0400 or 0600 using the command chmod 0400 filename.pem.
This is the duration for which your backups are retained on the BART server. Older backups are deleted based on this retention period.
This is the size of the additional volume for the BART server, on which the remote server backups are stored. Provide double your DB cluster data size. This size is in GB, so if your DB cluster size is 1 TB, provide size = 2048.
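The sizing rule above works out like this (the 1 TB figure is just a hypothetical example):

```shell
DB_SIZE_GB=1024                   # a 1 TB DB cluster, expressed in GB
VOLUME_GB=$(( DB_SIZE_GB * 2 ))   # double the data size, per the guidance above
echo "$VOLUME_GB"                 # prints 2048
```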
Run terraform init and then terraform apply to start creating the BART server.
This prompts you to enter the region where the resources will be created. Supply the region name; make sure it is the same region where the VPC provided in the Terraform input config file was created. You will then be prompted to enter yes to create the resources. Type yes and hit the enter key. This starts the process of creating and configuring the BART server.
The BART server needs to take backups of the clusters frequently. With this deployment we have added two scheduled backup jobs: one takes a full backup every Sunday at 00:05, and the other takes an incremental backup daily at 00:45. You can always modify these timings by logging on to the BART server.
If you already have an EC2 instance created and want to configure it for BART, you can do so using the ansible playbook we have created. This playbook helps you configure the BART server without any manual effort.
Below are the steps you can follow to configure it.
Go inside BART_AWS-->config_bart directory.
Open the hosts file and replace the values in it with the actual values for the BART server and database server.
Once you have filled in all the values, run the below command to configure the BART server.
ansible-playbook -i hosts bartserver.yml --extra-vars='USER= PASS= BART_IP= DB_IP= DB_ENGINE= DB_PASSWORD= DB_USER= RETENTION_PERIOD='
Where
- USER is the EDB yum repository username used to download the BART package
- PASS is the EDB yum repository user password
- BART_IP is the BART server public IP address
- DB_IP is the database server IP from which the BART server takes backups
- DB_ENGINE is the database engine installed on the master server, e.g. pg12 or epas12
- DB_PASSWORD is the database server superuser password
- DB_USER is the database server superuser name
- RETENTION_PERIOD is the number of days or weeks to keep backups on the BART server. You can leave this field blank.
A sample ansible-playbook run looks like this:
ansible-playbook -i hosts bartserver.yml --extra-vars='USER=myname PASS=admine@123 BART_IP=50.2.3.5 DB_IP=112.30.1.3 DB_ENGINE=pg12 DB_PASSWORD=postgres DB_USER=postgres RETENTION_PERIOD=1 DAYS'
The EDB tools deployment script allows you to create and use all the EDB tools together: a three-node DB cluster, an EFM setup, a PEM server, PEM agents, and a BART server. The latest versions of these tools are used, so you can set up all the tools in one go.
- Terraform installed on your machine.
- Ansible installed on your machine.
- IAM user with programmatic access.
- VPC created in your AWS account in the region of your choice.
- Minimum 3 subnets created in your VPC with auto-assign public IP enabled.
- Key pair created in advance for SSH.
- Subscription to the EDB yum repository (if using EDB Postgres).
- Optional IAM role created.
- S3 bucket created for backup.
- Subscription to the CentOS 7 AMI on the AWS Marketplace. (https://aws.amazon.com/marketplace/pp/Centosorg-CentOS-7-x8664-with-Updates-HVM/B00O7WM7QW)
You can install Terraform by following the instructions here. We have tested with Terraform version 0.12.18.
You can install Ansible by following the instructions here.
You need to create an IAM user with programmatic access. This IAM user is required to create resources in your AWS account.
The minimum permissions for this IAM user are as follows:
- To create an instance
- To terminate an instance
- Create EIP
- Associate EIP
- Release EIP
- S3 object get and put permission.
Additional permission (optional) is needed to attach an IAM role to the EC2 instance. EFM needs this IAM role: in case of failover, we detach the EIP from the failed master node and attach it to the new master node.
You need to create a VPC in advance with a minimum of 3 subnets. We create resources in these 3 different subnets to achieve high availability.
You need to create and download a .pem file on the local system where you have Terraform and Ansible installed. The .pem file's absolute path and name are needed in the terraform input file.
You need EDB yum repository credentials to download the software. These have to be provided in the terraform input file.
Modify the following mandatory fields in the terraform config file.
- instance_type
- ssh_keypair
- ssh_key_path
- subnet_id
- dbengine
- replication_password
- vpc_id
- s3bucket
- EDB_yumrepo_username
- EDB_yumrepo_password
- efm_role_password
- notification_email_address
- instance_type_pem
- subnet_id_pem
- ssh_keypair_pem
- ssh_key_path_pem
- db_password_pem
- subnet_id_bart
- instance_type_bart
- ssh_keypair_bart
- ssh_key_path_bart
- size
Except for subnet_id, all the fields accept a single string value. The subnet_id field requires a list of comma-separated values in square brackets.
For example: subnet_id = ["id1", "id2", "id3"]
Along with these mandatory fields, there are the following optional fields.
- iam-instance-profile
- custom_security_group_id
- replication_type
- db_user
- db_password
- cluster_name
- custom_security_group_id_pem
- custom_security_group_id_bart
- retention_period
The downloaded terraform config file has a description of these fields. Here we provide a more detailed explanation of them.
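Putting the mandatory fields together, a filled-in terraform input file might look like the following sketch (all values are placeholders; substitute your own):

```hcl
instance_type              = "t2.micro"
ssh_keypair                = "mykeypair"             # key pair name, without .pem
ssh_key_path               = "/home/user/mykeypair.pem"
subnet_id                  = ["subnet-aaa", "subnet-bbb", "subnet-ccc"]
dbengine                   = "pg12"
replication_password       = "replpass"
vpc_id                     = "vpc-0123456789"
s3bucket                   = "mybucket"
EDB_yumrepo_username       = ""                      # only needed for EDB Postgres
EDB_yumrepo_password       = ""
efm_role_password          = "efmpass"
notification_email_address = "admin@example.com"
instance_type_pem          = "t2.micro"
subnet_id_pem              = "subnet-aaa"
ssh_keypair_pem            = "mykeypair"
ssh_key_path_pem           = "/home/user/mykeypair.pem"
db_password_pem            = "pempass"
subnet_id_bart             = "subnet-bbb"
instance_type_bart         = "t2.micro"
ssh_keypair_bart           = "mykeypair"
ssh_key_path_bart          = "/home/user/mykeypair.pem"
size                       = "100"                   # additional EBS volume for BART, in GB
```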
AWS gives you the option to select a machine type as per your requirements. There is a wide range of instance types you can select and provide in the terraform config file, and Terraform will create an instance of that type. For example: t2.micro, m4.xlarge, etc.
When you create a key pair in AWS, you give it a name; provide that same name in this field. Please do not include the .pem extension.
This is the absolute path of your key pair file, ending with .pem, e.g. /usr/kanchan/pgdeploy.pem. Please make sure you have changed the permissions of this file to either 0400 or 0600 using the command chmod 0400 filename.pem.
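The permission change can be verified like this (using a scratch file as a stand-in for your downloaded key; `stat -c` is the GNU/Linux form):

```shell
touch demo.pem            # stand-in for your downloaded key pair file
chmod 0400 demo.pem       # owner read-only, as SSH requires for key files
stat -c '%a' demo.pem     # prints: 400
```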
When you create a VPC, you need to create subnets as well. Provide the IDs of those subnets here, for example subnet-20bdsj06b. Make sure these are public subnets with the auto-assign public IP setting enabled.
We are creating a 3-node cluster with replication set up among the nodes. This field sets the replication password for the replication role edbrepuser, which we create on your behalf, assigning the password of your choice.
This is the AWS VPC ID you have to provide. Terraform will create resources in this VPC only.
This is the S3 bucket name you have to provide for the WAL archive. The server needs push permission to the S3 bucket; we recommend granting this permission using the iam-instance-profile role attached to the instance.
If you have selected Enterprise PostgreSQL as your DB engine, you need to provide this field.
This is the EDB yum repository user password.
As mentioned above, if you are installing EFM, in the event of failover a fencing script will execute, which detaches the EIP from the failed master node and attaches it to the new master node.
So the instance must have permission to disassociate an EIP and associate it with another instance. As an IAM role is a secure way to grant this permission, we recommend creating an IAM role and providing its name here; the resources created by Terraform will have this role attached after creation.
This field is optional and you can leave it blank; however, you still need to grant this permission to the instance.
You can either attach the IAM role or provide an access_key and secret_key by logging into the instance and running the aws configure command.
You can attach the following IAM permissions to this role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1578576699220",
      "Action": [
        "ec2:AssociateAddress",
        "ec2:DisassociateAddress"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Sid": "Stmt1578576863132",
      "Action": [
        "s3:GetObject*",
        "s3:PutObject*"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::bucketname/*"
    }
  ]
}
You can create a security group in advance and provide its ID here. With this field we make sure you can whitelist the IPs of your choice and open the ports required for your DB server. If you leave this field blank, we will create a new security group on your behalf with the following rules. Please note that if you are creating a custom security group, you must open the ports below.
Port | IP-Range |
---|---|
22 | 0.0.0.0/0 |
5432 | 0.0.0.0/0 |
5444 | 0.0.0.0/0 |
7800-7900 | 0.0.0.0/0 |
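If you do create the security group yourself, the rules in the table above translate to Terraform ingress blocks roughly like this (a sketch; the resource name and VPC ID are placeholders):

```hcl
resource "aws_security_group" "db_cluster" {
  name   = "edb-db-cluster"
  vpc_id = "vpc-0123456789"   # your VPC ID

  # SSH
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # PostgreSQL (5432) and EDB Postgres Advanced Server (5444)
  ingress {
    from_port   = 5432
    to_port     = 5432
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
  ingress {
    from_port   = 5444
    to_port     = 5444
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # EFM cluster communication
  ingress {
    from_port   = 7800
    to_port     = 7900
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

For production you would typically narrow the cidr_blocks to your own IP ranges rather than 0.0.0.0/0.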
This field asks you to select the replication type: synchronous or asynchronous. By default this is asynchronous, which means that if you leave this field blank the cluster will be created with asynchronous replication.
If you decide not to use the default DB username (for PostgreSQL it is postgres; for Enterprise PostgreSQL it is enterprisedb), you can provide the name of the DB user here. We will create a user with this name and assign the default password postgres (which you can change later).
This is a custom DB password for your DB user. If you leave this field blank, the default password will be postgres.
This is an optional field for tagging your resources. You can leave this field blank and we will use the dbengine value as the default tag name.
We are creating a separate role for EFM operations. The role name is efmadmin; here you need to provide the password for that role.
Provide an email address to receive cluster health notifications and status-change alerts.
We are creating a separate PEM monitoring server, so in this field you need to provide an instance type like t2.micro, m4.xlarge, etc.
You can create a security group in advance for the PEM monitoring server and provide its ID here. If you leave this field blank, we will create a security group on your behalf. If you are creating the security group yourself, make sure ports 8443 and 5444 are open.
Here you need to provide the subnet ID for the PEM monitoring server.
Provide the SSH key pair name for the PEM monitoring server. This is only the name of the key pair and should not end with .pem.
This is the absolute path of your pem file. For example /Users/Documents/demo.pem
Here you need to provide the PEM monitoring server password.
We are creating the BART server separately. Here you need to provide the subnet ID for the BART server.
Provide the instance type for the BART server, like t2.xlarge, m4.large, etc.
Here you can provide a security group ID. This is an optional field and you can leave it blank; if so, we will create a new security group on your behalf.
Provide the SSH key pair name for the BART server. This is only the name of the key pair and should not end with .pem.
This is the absolute path of your pem file. For example /Users/Documents/demo.pem
This determines when an active backup should be marked as obsolete. You can specify the retention policy either in terms of the number of backups or in terms of duration (days, weeks, or months), e.g. 3 MONTHS, 7 DAYS, or 1 WEEK. Leave it blank if you don't want to specify a retention period.
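On the BART server, this value is applied through the retention_policy parameter in BART's configuration file. A sketch of the relevant per-server fragment (the section name and host are placeholders):

```ini
; bart.cfg -- per-server section (name and host are illustrative)
[pg]
host = 112.30.1.3
retention_policy = 3 MONTHS   ; or e.g. 7 DAYS, 1 WEEK, or a backup count
```

This is also where you would change the retention period later, by editing the file on the BART server.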
We are attaching an additional EBS volume to store the backups. Here you need to provide the size in GB for that volume. Make sure the volume you attach is double the size of your DB.
Run the command below to start the process of creating the EDB tools.
This will prompt you to enter the region where resources are going to be created. Supply the region name, making sure it is the same region where your VPC was created and that was provided in the terraform input config file. It will then prompt you to enter yes to create the resources. Type yes and hit the enter key. This starts the process of creating and configuring the EDB tools.