Kanchan Mohitey edited this page Mar 21, 2020 · 14 revisions

AWS

Introduction

This document will take you through the steps of creating and configuring a database cluster (3 nodes: 1 master and 2 standby) in your AWS account. It also provides scripts to deploy EDB tools such as EDB Failover Manager (EFM), Postgres Enterprise Manager (PEM), etc. We use Terraform for provisioning the infrastructure and Ansible for configuring the EDB software on the provisioned nodes.

We use CentOS 7 as the base AMI for instance creation.

Prerequisites

  • Terraform installed on your machine.
  • Ansible installed on your machine.
  • An IAM user with programmatic access.
  • A VPC created in your AWS account in the region of your choice.
  • A minimum of 3 subnets created in your VPC with auto-assign public IP enabled.
  • A key pair created in advance for SSH.
  • A subscription to the EDB yum repository (if using EDB Postgres).
  • Optionally, an IAM role created.
  • An S3 bucket created for backups.
  • A subscription to the CentOS 7 AMI on the AWS Marketplace (https://aws.amazon.com/marketplace/pp/Centosorg-CentOS-7-x8664-with-Updates-HVM/B00O7WM7QW).

Install Terraform

You can install Terraform by following the instructions here. We have tested with Terraform version 0.12.18.

Install Ansible

You can install Ansible by following the instructions here.

Create an IAM user

You need to create an IAM user with programmatic access. This IAM user is required to create resources in your AWS account.

The minimum permissions for this IAM user are as follows:

  • Create instances
  • Terminate instances
  • Create EIPs
  • Associate EIPs
  • Release EIPs
  • S3 object get and put

An additional (optional) permission attaches the IAM role to the EC2 instance. EFM needs this IAM role: in case of failover, the EIP is detached from the failed master node and attached to the new master node.

Create VPC in your AWS account:

You need to create a VPC in advance with a minimum of 3 subnets. Resources are created in these 3 different subnets to achieve high availability.

PEM file for ssh:

You need to create and download a .pem file to the local system where Terraform and Ansible are installed. The absolute path and name of the .pem file are required in the Terraform input file.

Subscription for EDB yum repository:

If you are installing EDB Postgres, you need EDB yum repository credentials to download the software. These are provided in the Terraform input file.

Create a Database Cluster (3 node - 1Master, 2Standby)

The following steps will create a 3-node cluster.

Download the deployment scripts from the git repository; they will download into a folder. Go inside the DB_Cluster_AWS folder and edit the file with the .tf extension using your favorite editor.

Each field has its description commented out.

Modify the following mandatory fields in the terraform config file.

  • instance_type
  • ssh_keypair
  • ssh_key_path
  • subnet_id
  • dbengine
  • replication_password
  • vpc_id
  • s3bucket

Except for subnet_id, all the fields accept a single string value. The subnet_id field requires a list of comma-separated values in square brackets.

For example: subnet_id = ["id1", "id2", "id3"]

Along with these mandatory fields, there are optional fields that are as follows.

  • EDB_yumrepo_username
  • EDB_yumrepo_password
  • iam-instance-profile
  • custom_security_group_id
  • replication_type
  • db_user
  • db_password

The downloaded Terraform config file contains a description of each of these fields. Here we provide a more detailed explanation.
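As a sketch, the mandatory fields in a filled-in config might look like the following (all values are hypothetical placeholders, not working IDs or credentials):

```hcl
# Hypothetical example values -- replace every one with your own.
instance_type        = "t2.micro"
ssh_keypair          = "pgdeploy"                   # key pair name, without .pem
ssh_key_path         = "/usr/kanchan/pgdeploy.pem"  # absolute path to the key file
subnet_id            = ["subnet-aaa111", "subnet-bbb222", "subnet-ccc333"]
dbengine             = "epas12"
replication_password = "choose-a-strong-password"
vpc_id               = "vpc-0123456789abcdef0"
s3bucket             = "my-wal-archive-bucket"
```

The three subnet IDs should belong to three different subnets of the VPC, matching the high-availability layout described above.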

instance_type

AWS gives you the option to select a machine type as per your requirements. There is a wide range of instance types you can select and provide in the Terraform config file; Terraform will create an instance of that type. For example: t2.micro, m4.xlarge, etc.

ssh_keypair

When you create a key pair in AWS, you give it a name; provide the same name in this field. Please do not include the .pem extension.

ssh_key_path

This is the absolute path of your key pair file, ending with .pem, e.g. /usr/kanchan/pgdeploy.pem. Please make sure you change the permissions of this file to either 0400 or 0600 using the command chmod 0400 filename.pem.
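The permission change can be verified like this (using a hypothetical file name, not your real key):

```shell
# Create a dummy key file just to demonstrate the permission change.
touch demo.pem
chmod 0400 demo.pem
# Show the resulting octal permissions (GNU stat on Linux).
stat -c '%a' demo.pem
```

SSH refuses keys that are readable by other users, so this step is required before Terraform and Ansible can connect to the instances.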

subnet_id

When you create the VPC, you need to create subnets as well. Provide the IDs of those subnets here, for example subnet-20bdsj06b. Make sure these are public subnets with the auto-assign public IP setting enabled.

dbengine

We provide an option to select among different DB engines when creating the cluster. The available DB engines are as follows.

DB engine                          Value
PostgreSQL version 10              pg10
PostgreSQL version 11              pg11
PostgreSQL version 12              pg12
Enterprise PostgreSQL version 10   epas10
Enterprise PostgreSQL version 11   epas11
Enterprise PostgreSQL version 12   epas12

replication_password

We create a 3-node cluster with replication set up among the nodes. This field sets the password for the replication role edbrepuser, which we create on your behalf, to the value of your choice.

vpc_id

This is the AWS VPC ID you have to provide. Terraform will create a resource in this VPC only.

s3bucket

This is the S3 bucket name you have to provide for the WAL archive. The server requires push permission to the S3 bucket; we recommend granting this permission using the iam-instance-profile role attached to the instance.

EDB_yumrepo_username

If you have selected Enterprise PostgreSQL as your DB engine, you need to provide this field.

EDB_yumrepo_password

This is the yum repository password for the user above.

iam-instance-profile

As mentioned above, if you are installing EFM, in the event of failover a fencing script executes which detaches the EIP from the failed master node and attaches it to the new master node.

So the instance must have permission to disassociate the EIP and associate it with other instances. As an IAM role is a secure way to provide this permission, we recommend creating an IAM role and providing its name here. The resources created using Terraform will have this role attached after creation.

Again, this field is optional and you can leave it blank; however, you still need to provide this permission to the instance. You can either attach the IAM role or provide an access_key/secret_key by logging into the instance and running the aws configure command.

You can attach the following IAM permissions to this role:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1578576699220",
            "Action": [
                "ec2:AssociateAddress",
                "ec2:DisassociateAddress"
            ],
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Sid": "Stmt1578576863132",
            "Action": [
                "s3:GetObject*",
                "s3:PutObject*"
            ],
            "Effect": "Allow",
            "Resource": "arn:aws:s3:::bucketname/*"
        }
    ]
}

custom_security_group_id

You can create a security group in advance and provide its ID here; this lets you whitelist the IPs of your choice and open the ports required for your DB server. If you leave this field blank, we create a new security group on your behalf with the following rules. Please note: if you are creating a custom security group, open the ports below.

Port        IP range
22          0.0.0.0/0
5432        0.0.0.0/0
5444        0.0.0.0/0
7800-7900   0.0.0.0/0

replication_type

This field asks you to select the replication type: synchronous or asynchronous. The default is asynchronous, which means that if you leave this field blank the cluster is created with asynchronous replication.

db_user

If you decide not to use the default DB username (for PostgreSQL it is postgres; for Enterprise PostgreSQL it is enterprisedb), you can provide the name of the DB user here. We will create a user with this name and assign the default password postgres (which you can change later).

db_password

This is a custom DB password for your DB user. If you leave this field blank, the default password will be postgres.

Deploy

Once you have edited the file and supplied the mandatory fields, save the file and exit. Before executing, you need to set the following environment variables:

export AWS_ACCESS_KEY_ID=Your_AWS_Access_ID

export AWS_SECRET_ACCESS_KEY=Your_AWS_Secret_Key_ID

These are the AWS credentials for creating resources in your AWS account. Terraform needs them before the config file is executed.

export PATH=$PATH:/path_of_terraform_binary_file_directory

This is the absolute path of the directory containing the Terraform binary you downloaded while installing Terraform.
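Putting the three exports together, a session might look like this (the key values and the Terraform directory are placeholders):

```shell
# Hypothetical placeholder values -- substitute your own credentials and path.
export AWS_ACCESS_KEY_ID=AKIAEXAMPLEKEYID
export AWS_SECRET_ACCESS_KEY=ExampleSecretKey
export PATH=$PATH:/opt/terraform
# Confirm the variables are visible to the current shell session.
echo "$AWS_ACCESS_KEY_ID"
```

These exports last only for the current shell session, so they must be repeated (or added to your shell profile) before each run.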

Once you have done this, run the following commands to execute the Terraform config file:

terraform init

terraform apply

This will prompt you to enter the region where resources are going to be created. Please supply the region name. Make sure you supply the same region where the VPC was created and which you provided in the Terraform input config file.

It will then prompt you to confirm resource creation. Type yes and hit the enter key. This starts the process of creating and configuring the DB cluster.

Once the process completes, you can see the cluster IP addresses (both public and private).

Ansible Deployment For 3 Node Cluster.

If you have the infrastructure ready and want to install/configure DB, you can run ansible playbooks and set that up.

Here are the steps you can follow.

Go to the module directory EDB_SR_SETUP/utilities/scripts.

Create a hosts file with the following content.

master_public_ip ansible_user= ansible_ssh_private_key_file=

slave1_public_ip ansible_user= ansible_ssh_private_key_file=

slave2_public_ip ansible_user= ansible_ssh_private_key_file=

Replace username and path to file with your values.
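For example, a filled-in hosts file might look like this (the IPs, user, and key path are hypothetical; centos is the usual default user for the CentOS 7 AMI):

```
203.0.113.10 ansible_user=centos ansible_ssh_private_key_file=/usr/kanchan/pgdeploy.pem
203.0.113.11 ansible_user=centos ansible_ssh_private_key_file=/usr/kanchan/pgdeploy.pem
203.0.113.12 ansible_user=centos ansible_ssh_private_key_file=/usr/kanchan/pgdeploy.pem
```

The first line is the master's public IP and the other two are the standbys, matching the order the playbook expects.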

Use the command below to run the Ansible playbook. Make sure you provide the extra arguments.

ansible-playbook -i hosts setupsr.yml --extra-vars='USER= PASS= EPASDBUSER= PGDBUSER= ip1= ip2= ip3= S3BUCKET= REPLICATION_USER_PASSWORD= DBPASSWORD= REPLICATION_TYPE= DB_ENGINE= MASTER= SLAVE1= SLAVE2='
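As an illustration only, with hypothetical values filled in for a community PostgreSQL 12 cluster (EDB repo credentials and EPASDBUSER left blank, as described below), the invocation might look like:

```
ansible-playbook -i hosts setupsr.yml --extra-vars='USER= PASS= EPASDBUSER= PGDBUSER=postgres ip1=10.0.1.10 ip2=10.0.2.10 ip3=10.0.3.10 S3BUCKET=my-bucket/wal REPLICATION_USER_PASSWORD=replpass DBPASSWORD=dbpass REPLICATION_TYPE=asynchronous DB_ENGINE=pg12 MASTER=203.0.113.10 SLAVE1=203.0.113.11 SLAVE2=203.0.113.12'
```

Every value here is a placeholder; substitute your own internal and public IPs, bucket, and passwords.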

Where

  • USER is EDB yum repo user name(If using EDB postgres)

  • PASS is EDB yum repo user password

  • EPASDBUSER is DB username for EDB postgres(leave blank if using community postgres)

  • PGDBUSER is DB username for community postgres(leave blank if using EDB postgres)

  • ip1 is internal IP of master server

  • ip2 is internal IP of slave1

  • ip3 is internal IP of slave2

  • S3BUCKET is the S3 bucket name followed by the folder name (e.g. bucketname/foldername)

  • REPLICATION_USER_PASSWORD is replication user/role password.

  • DBPASSWORD is DB password.

  • REPLICATION_TYPE synchronous or asynchronous

  • DB_ENGINE provide options like pg10 or pg11 or epas10 or epas12

  • MASTER is master server public IP

  • SLAVE1 is slave1 server public IP

  • SLAVE2 is slave2 server public IP

SetUp EFM Cluster

Introduction

In this document, we will walk you through the steps of setting up EFM on the cluster created earlier.

Note: Before setting up EFM, you must attach the IAM role or provide the access_key/secret_key on the server by running aws configure. In case of failover, EFM runs a fencing script which detaches the EIP from the failed master node and attaches it to the new master.

Please refer to the "iam-instance-profile" option in Create a Database Cluster (3 node - 1Master, 2Standby).

Prerequisite

The DB cluster for which EFM is going to be set up must have been created using the EDB deployment scripts.

Installation and configuration

Download the deployment scripts from the git repository; this will download a directory. Go inside the EFM_Setup_AWS directory and open the input file with the .tf extension in your favorite editor.

Add the fields below in that file. You can read the descriptions of the fields in the comments.

  • EDB_Yum_Repo_Username
  • EDB_Yum_Repo_Password
  • notification_email_address
  • efm_role_password
  • db_user

Though you can find field descriptions in the input file, here are detailed explanations for each of the fields.

EDB_Yum_Repo_Username and EDB_Yum_Repo_Password

These are mandatory fields if you selected community Postgres (pg10, pg11, pg12) as your DB engine, and optional if you selected EnterpriseDB, as these credentials were already configured when you created the cluster.

notification_email_address

Provide an email address where you will receive notifications related to your EFM cluster operations.

efm_role_password

We create a separate DB role for EFM operations called edbefm and assign it the permissions required for failover management. In this field, you need to provide the password for this role.

db_user

If you created a custom DB user while creating the cluster, you need to provide that user's name. This field is required to create the EFM role, since a CREATE ROLE query must be executed against the DB.

Once you have supplied all these fields, save the input file and exit.

Deploy

Run the following commands to start setting up the EFM cluster:

terraform init

terraform apply

This will prompt you to confirm the creation of the cluster. Type yes and hit enter to proceed. Once finished, you will see resource-created messages on your terminal.

Ansible Deployment For EFM Cluster.

If you have the infrastructure ready and want to set up EFM, you can run ansible playbooks and set that up.

Here are the steps you can follow.

Go to the module directory EFM_Setup_AWS/utilities/scripts.

Create a hosts file with the following content.

master_public_ip ansible_user= ansible_ssh_private_key_file=

slave1_public_ip ansible_user= ansible_ssh_private_key_file=

slave2_public_ip ansible_user= ansible_ssh_private_key_file=

Replace username and path to file with your values.

Use the command below to run the Ansible playbook. Make sure you provide the extra arguments.

ansible-playbook -i hosts setupefm.yml --extra-vars='DB_ENGINE= USER= PASS= DBUSER= EFM_USER_PASSWORD= MASTER= SLAVE1= SLAVE2= ip1= ip2= ip3= NOTIFICATION_EMAIL= REGION_NAME= S3BUCKET='

Where

  • DB_ENGINE provide options like pg10 or pg11 or epas10 or epas12

  • USER is yum repo user name

  • PASS is yum repo user password.

  • DBUSER is DB username.

  • EFM_USER_PASSWORD is password for EFM role.

  • MASTER is master server public IP.

  • SLAVE1 is slave1 public IP.

  • SLAVE2 is slave2 public IP

  • ip1 is master server internal IP.

  • ip2 is slave1 server internal IP.

  • ip3 is slave2 server internal IP.

  • NOTIFICATION_EMAIL is email address for EFM notification.

  • REGION_NAME is AWS region code.

  • S3BUCKET is the S3 bucket name followed by the folder name (e.g. bucketname/foldername)

Create a PEM monitoring server

Introduction

The PEM monitoring server is used to monitor DB servers. In this document, we will walk you through the steps of creating a PEM monitoring server.

Prerequisite

  • VPC already created
  • Key pair created
  • Subnet created in the VPC

Installation and configuration

Download the deployment scripts from git; this will download a folder. Go inside the PEM_Server_AWS folder and open the file with the .tf extension in your favorite editor.

Provide the fields below in the file.

  • instance_type
  • custom_security_group_id
  • ssh_keypair
  • ssh_key_path
  • subnet_id
  • vpc_id
  • db_password
  • region_name
  • EDB_yumrepo_username
  • EDB_yumrepo_password

instance_type

AWS gives you the option to select a machine type as per your requirements. There is a wide range of instance types you can select and provide in the Terraform config file; Terraform will create an instance of that type. For example: t2.micro, m4.xlarge, etc.

ssh_keypair

When you create a key pair in AWS, you give it a name; provide the same name in this field.

ssh_key_path

This is the absolute path of your key pair file, ending with .pem. Please make sure you change the permissions of this file to either 0400 or 0600 using the command chmod 0400 filename.pem.

subnet_id

When you create the VPC, you need to create subnets as well. Provide the ID of the subnet here, for example subnet-20bdsj06b.

db_password

This is the PEM server backend DB password you need to supply. We will install Enterprise PostgreSQL version 11 as the backend database for the PEM server.

custom_security_group_id

You can create a security group in advance and provide its ID here; this lets you whitelist the IPs of your choice and open the ports required for your DB server. If you leave this field blank, we create a new security group on your behalf with the following rules.

Please note: if you are creating a custom security group, open the ports below.

Port        IP range
22          0.0.0.0/0
5432        0.0.0.0/0
5444        0.0.0.0/0
8443        0.0.0.0/0

vpc_id

This is the AWS VPC ID you have to provide. Terraform will create a resource in this VPC only.

region_name

This is the AWS region name where resources will be created.

EDB_Yum_Repo_Username and EDB_Yum_Repo_Password

These are mandatory fields. All EDB tools require EDB yum repository credentials; provide those credentials in these fields.

Deploy

Once you have filled in all the details, save the file and exit.

Run terraform init and then terraform apply to start creating resources. This will prompt you to enter the AWS region; provide a region code like us-east-1 and hit enter. The next prompt asks for your confirmation to create the resources. Type yes and hit the enter key. Terraform starts creating your resources, and the PEM server is configured using Ansible.

Once this completes, you will see the PEM server IP displayed as output on your screen.

Access the PEM server

https://<ip_address_of_PEM_host>:8443/pem

Username: enterprisedb

Password: The DB password you entered in db_password field

Ansible Deployment For PEM monitoring server.

If you have the infrastructure ready and want to configure that for PEM monitoring server, you can run ansible playbooks and configure it.

Here are the steps you can follow.

Go to the module directory EDB_PEM_Server/utilities/scripts.

Create a hosts file with the following content.

pem_public_ip ansible_user= ansible_ssh_private_key_file=

Replace username and path to file with your values.

Use the command below to run the Ansible playbook. Make sure you provide the extra arguments.

ansible-playbook -i hosts pemserver.yml --extra-vars='USER= PASS= DB_PASSWORD= PEM_IP='

Where

  • USER is yum repo user name if DB_ENGINE is pg10,pg11,pg12.

  • PASS is yum repo user password if DB_ENGINE is pg10,pg11,pg12.

  • PEM_IP is IP of PEM server.

  • DB_PASSWORD is password for PEM server local DB

Register PEM Agent with PEM server

Introduction

To start monitoring the DB servers created earlier with the PEM monitoring server, we need to register an agent with it: the PEM agent. The following document explains how to register a PEM agent with a PEM server.

Prerequisite

  • DB cluster created using EDB deployment scripts.
  • PEM server created using EDB deployment scripts.

Installation & Configuration

Download the deployment scripts from the git repo; this downloads a folder containing your Terraform input file. Go inside the PEM_Agent_AWS directory and open the file with the .tf extension in your favorite text editor.

Add the fields below in the input file.

  • EDB_Yum_Repo_Username
  • EDB_Yum_Repo_Password
  • db_user
  • db_password
  • pem_web_ui_password

EDB_Yum_Repo_Username & EDB_Yum_Repo_Password

These are the EDB yum repository credentials you need to pass if you are using the community version of PostgreSQL, that is pg10, pg11, or pg12.

db_user

This is the remote server DB username. Provide the username if you changed it from the default, or leave it blank if you have not.

db_password

This is the DB password of a remote server.

pem_web_ui_password

This is the PEM monitoring server DB password. The same password is used to log in to the PEM server UI.

Once you are done editing the file, save it and exit.

Deploy

Run the commands below to start the process of registering the PEM agent with the PEM server:

terraform init

terraform apply

This will prompt you for confirmation. Type yes and hit enter. This starts the process of registering the PEM agent with the PEM server; at the end you will see a resource-added message on the display.

Ansible Deployment For PEM Agent.

If you have the infrastructure ready and want to set up pem agent, you can run ansible playbooks and configure it.

Below are the steps you can follow to configure it.

Go to the module directory EDB_PEM_AGENT/utilities/scripts.

Create a hosts file with the following content.

master_public_ip ansible_user= ansible_ssh_private_key_file=

slave1_public_ip ansible_user= ansible_ssh_private_key_file=

slave2_public_ip ansible_user= ansible_ssh_private_key_file=

Use the command below to run the Ansible playbook. Make sure you provide the extra arguments.

ansible-playbook -i hosts installpemagent.yml --extra-vars='DB_ENGINE= USER= PASS= PEM_IP= DBPASSWORD= PEM_WEB_PASSWORD= EPASDBUSER= PGDBUSER='

Where

  • DB_ENGINE provide options like pg10 or pg11 or epas10 or epas12

  • USER is yum repo user name if DB_ENGINE is pg10,pg11,pg12.

  • PASS is yum repo user password if DB_ENGINE is pg10,pg11,pg12.

  • EPASDBUSER is DB username for EDB Postgres(leave blank if using community Postgres)

  • PGDBUSER is DB username for community Postgres(leave blank if using EDB Postgres)

  • DBPASSWORD is DB password

  • PEM_WEB_PASSWORD PEM server DB password.

  • PEM_IP is IP of PEM server.

Scale the DB Cluster

Introduction:

We provide this option to scale your existing DB cluster. It not only expands the DB cluster by adding additional replicas but also registers the newly created instances with the PEM server and adds them to the EFM cluster. High availability is built in.

In addition, we provide an option to create a mixed-mode replication setup: you can add a new server to the existing cluster with synchronous replication even though the previous cluster was asynchronous. If your existing cluster was created with synchronous replication, leave this field blank.

Prerequisite:

  • DB Cluster set up using EDB deployment scripts.
  • PEM server created using EDB deployment scripts.
  • EFM cluster set up using EDB deployment scripts.
  • VPC created on your AWS account in the region of your choice.
  • Key pair created in advance for ssh
  • Subnet id

Installation & Configuration:

Download the deployment scripts from the git repository; a folder containing the Terraform input file will download. Go inside the Expand_DB_Cluster_AWS directory and open the file with the .tf extension in your favorite text editor.

Add the fields below in the input file.

  • EDB_yumrepo_username
  • EDB_yumrepo_password
  • instance_type
  • iam-instance-profile
  • custom_security_group_id
  • subnet_id
  • replication_type
  • replication_password
  • vpc_id
  • notification_email_address
  • efm_role_password
  • pem_web_ui_password
  • db_password
  • db_user

instance_type

AWS gives you the option to select a machine type as per your requirements. There is a wide range of instance types you can select and provide in the Terraform config file; Terraform will create an instance of that type. For example: t2.micro, m4.xlarge, etc.

subnet_id

When you create VPC, you need to create subnets as well. You need to provide ID of those subnets here. For example subnet-20bdsj06b

replication_password

This field sets the password for the replication role edbrepuser, which we create on your behalf, to the value of your choice.

vpc_id

This is the AWS VPC ID you have to provide. Terraform will create a resource in this VPC only.

EDB_yumrepo_username

If you have selected Enterprise PostgreSQL as your DB engine, you need to provide this field.

EDB_yumrepo_password

This is the yum repository password for the user above.

iam-instance-profile

As mentioned above, if you are installing EFM, in the event of failover a fencing script executes which detaches the EIP from the failed master node and attaches it to the new master node. So the instance must have permission to disassociate the EIP and associate it with other instances. As an IAM role is a secure way to provide this permission, we recommend creating an IAM role and providing its name here. The resources created using Terraform will have this role attached after creation.

Again, this field is optional and you can leave it blank; however, you still need to provide this permission to the instance.

You can either attach the IAM role or provide an access_key/secret_key by logging into the instance and running the aws configure command.

custom_security_group_id

You can create a security group in advance and provide its ID here; this lets you whitelist the IPs of your choice and open the ports required for your DB server. If you leave this field blank, we create a new security group on your behalf with the following rules. Please note: if you are creating a custom security group, open the ports below.

Port        IP range
22          0.0.0.0/0
5432        0.0.0.0/0
5444        0.0.0.0/0
7800-7900   0.0.0.0/0

replication_type

This field asks you to select the replication type: synchronous or asynchronous. The default is asynchronous, which means that if you leave this field blank the cluster is created with asynchronous replication.

notification_email_address

Provide an email address where you will receive a notification related to your EFM cluster operation.

efm_role_password

We create a separate DB role for EFM operations called edbefm and assign it the permissions required for failover management. In this field, you need to provide the password for this role.

db_password

This is the DB password of a remote server.

db_user

This is the DB username if you do not want to go with the default. Leaving this field blank uses the default username.

pem_web_ui_password

This is the PEM monitoring server DB password. The same password is used to log in to the PEM server UI.

Once you are done editing the file, save it and exit.

Deploy

Run the commands below to start the process of scaling the cluster:

terraform init

terraform apply

This will prompt you to enter the region where resources are going to be created. Please supply the region name; make sure it is the same region where the VPC was created and which you provided in the Terraform input config file. It will then prompt you to confirm resource creation. Type yes and hit the enter key. This starts the process of creating and configuring the additional node.

Ansible Deployment For Scale Existing Cluster.

If you have the infrastructure ready and want to scale cluster, you can run Ansible playbooks and configure it.

Below are the steps you can follow to configure it.

Go to module directory EDB_ADD_REPLICA_VMWARE/utilities/scripts.

Create a hosts file with the following content.

master_ip ansible_user= ansible_ssh_pass=
slave1_ip ansible_user= ansible_ssh_pass=
slave2_ip ansible_user= ansible_ssh_pass=
slave3_ip ansible_user= ansible_ssh_pass=

Replace the username and password with your values.

Use the command below to run the Ansible playbook. Make sure you provide the extra arguments.

ansible-playbook -i hosts expandcluster.yml --extra-vars='DB_ENGINE= USER= PASS= PGDBUSER= EPASDBUSER= NEWSLAVE= REPLICATION_USER_PASSWORD= REPLICATION_TYPE= ip1= ip2= ip3= selfip= NOTIFICATION_EMAIL= SLAVE1= SLAVE2= DBPASSWORD= PEM_IP= PEM_WEB_PASSWORD='

Where

  • DB_ENGINE provide options like pg10 or pg11 or epas10 or epas12

  • USER is yum repo user name

  • PASS is yum repo user password.

  • EPASDBUSER is DB username for EDB postgres(leave blank if using community postgres)

  • PGDBUSER is DB username for community postgres(leave blank if using EDB postgres)

  • ip1 is internal IP of master server

  • ip2 is internal IP of slave1

  • ip3 is internal IP of slave2

  • REPLICATION_USER_PASSWORD is replication user/role password.

  • DBPASSWORD is DB password.

  • REPLICATION_TYPE synchronous or asynchronous

  • MASTER is master server public IP

  • SLAVE1 is slave1 server public IP

  • SLAVE2 is slave2 server public IP

  • EFM_USER_PASSWORD is password for EFM role.

  • NOTIFICATION_EMAIL is email address for EFM notification.

  • NEWSLAVE is public IP of new server

  • selfip is internal IP of new server.

  • PEM_WEB_PASSWORD PEM server DB password.

  • PEM_IP is IP of PEM server.
