
Guide To Deploy AWS Apps


Guide to deploying Kikaha apps on Amazon

The objective of this guide is to show the basic features Kikaha offers out of the box that may make deployment and monitoring easier on Amazon.

Requirements

This guide focuses on the 2.1.x version of Kikaha, which at the time of this writing is still in beta. Its release is planned for June 2017. This guide also assumes you are familiar with Amazon Web Services and with how services are authenticated on AWS.

For more information about AWS, please consult this guide.

AWS Cloud Modules Overview

Out of the box, Kikaha provides some features that integrate with AWS services:

  • Versatile configuration mechanism to retrieve credentials

  • Auto Join on AWS Application Load Balancer

  • Deploy artifacts to AWS S3

  • Deploy artifacts through AWS Code Deploy

  • Read configurations from AWS EC2 tags

  • Distribute Task Consumers with AWS SQS

  • Monitor your application throughput with AWS X-Ray

  • Create alarms from JVM metrics with CloudWatch

  • Deploy serverless application with AWS Lambda

The following features are still marked as TODO:

  • Distribute Task Consumers with AWS SQS
  • Monitor your application throughput with AWS X-Ray
  • Deploy serverless application with AWS Lambda

Creating an IAM policy

In order to communicate properly with AWS services, you have to create an IAM Policy that allows Kikaha to execute tasks on your behalf. For the sake of simplicity, and to keep this guide a little less verbose, we will not describe a complete IAM policy for each AWS resource Kikaha may use; instead, we describe the set of "actions" required by each of the features above as needed.

Versatile configuration mechanism to retrieve credentials

Maybe the most important thing one should care about when designing an application that interacts with AWS services is its credential model. By default, in Kikaha applications, any managed class is able to inject com.amazonaws.auth.AWSCredentials definitions (or AmazonWebServiceConfiguration, which basically holds a com.amazonaws.auth.AWSCredentialsProvider and a com.amazonaws.regions.Regions). The sample code below shows how you can do this, and the sketch right after it illustrates one way the injected credentials could be used.

public class MyManagedClass {

  // Credentials resolved by the configured credential factory
  @Inject AWSCredentials credentials;

  // Holds the AWSCredentialsProvider and the Regions definition used by the AWS modules
  @Inject AmazonWebServiceConfiguration awsConfiguration;
}
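
For illustration purposes only, the sketch below shows one way the injected credentials could be handed to a regular AWS SDK client builder. It is plain AWS SDK code, not Kikaha API, and it assumes the aws-java-sdk-s3 dependency is on the classpath; the region value is just a placeholder.

// Plain AWS SDK sketch (assumption: aws-java-sdk-s3 is available); in a Kikaha managed class
// the credentials would typically be injected, as shown in MyManagedClass above.
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class S3ClientFactory {

  private final AWSCredentials credentials;

  public S3ClientFactory(AWSCredentials credentials) {
    this.credentials = credentials;
  }

  public AmazonS3 newS3Client() {
    // Wraps the resolved credentials into a static provider and hands them to the SDK builder.
    return AmazonS3ClientBuilder.standard()
        .withCredentials(new AWSStaticCredentialsProvider(credentials))
        .withRegion("us-east-1") // placeholder region; see the region configuration section below
        .build();
  }
}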

Using IAM Roles for Amazon EC2 to set up credentials and region

As stated in the IAM Roles for Amazon EC2 guide, attaching an IAM Role to your EC2 instances is a good strategy for managing credentials for your applications. The default Kikaha configuration respects the Default Credential Chain, ensuring that you can provide credentials through environment variables, EC2 Roles, ECS container credentials and the default credential profiles file (usually located at {USER HOME DIR}/.aws/credentials).
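
As a minimal sketch of that chain, the snippet below resolves credentials directly with the AWS SDK's DefaultAWSCredentialsProviderChain. It assumes only the aws-java-sdk-core dependency on the classpath and is meant to illustrate the lookup order, not something Kikaha requires you to write.

// Minimal sketch assuming only the aws-java-sdk-core dependency on the classpath.
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain;

public class CredentialChainCheck {

  public static void main(String[] args) {
    // The default chain resolves credentials, in order, from: environment variables,
    // Java system properties, the profile file at {USER HOME DIR}/.aws/credentials,
    // ECS container credentials and, finally, the EC2 instance profile (IAM Role).
    AWSCredentials credentials = new DefaultAWSCredentialsProviderChain().getCredentials();
    System.out.println("Resolved access key id: " + credentials.getAWSAccessKeyId());
  }
}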

Using application.yml to set up credentials and region

Kikaha also provides an alternative mechanism, in case you want, for any reason, to keep your credential configuration along with the other configuration entries in the application.yml file. Basically, all you need to do is define kikaha.cloud.aws.iam.AmazonCredentialsFactory$Yml as the credential factory for AWS services, as shown below.

# application.yml
server:
  aws:
    credentials-factory: kikaha.cloud.aws.iam.AmazonCredentialsFactory$Yml
    iam-policy:
      access-key-id: ACCESS_KEY
      secret-access-key: SECRET_ACCESS_KEY

As you can see, you define your default credentials under the service entry named iam-policy. Under the hood, this Yml-based configuration follows the server.aws.<module> convention to provide information about a specific module, in this case, iam-policy.

Configuring the Region of your AWS clients

It is also possible to configure which region an AWS resource should use. Basically, you can set the attribute server.aws.<service>.region, where service represents the AWS resource you want to configure. Below is the default yml configuration Kikaha uses, followed by a short sketch of what this setting means on the SDK side.

server:
  aws:
    # fallback configuration for every AWS service
    default:
      region: "us-east-1"

    # IAM service
    iam-policy:
        # access-key-id: ACCESS_KEY
        # secret-access-key: SECRET_ACCESS_KEY

    # EC2 service
    ec2:
      #region: "us-east-1"

    # ELB service
    elb:
      #region: "us-east-1"

    # SQS service
    sqs:
      #region: "us-east-1"

    # CloudWatch service
    cloudwatch:
      #region: "us-east-1"

    # The X-Ray service does not provide a way to set up the region.
    # During our tests, it appeared to send data to the region where your application is actually running.
    x-ray:
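
To make the mapping concrete, the sketch below shows roughly what a per-service region entry means on the AWS SDK side, using SQS as an example. This is plain SDK code for illustration, not what Kikaha executes internally, and it assumes the aws-java-sdk-sqs dependency.

// Plain AWS SDK sketch (assumption: aws-java-sdk-sqs is available), not Kikaha internals.
import com.amazonaws.regions.Regions;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

public class RegionedSqsClientFactory {

  public AmazonSQS newSqsClient() {
    // Conceptually equivalent to reading server.aws.sqs.region and,
    // when it is absent, falling back to server.aws.default.region.
    return AmazonSQSClientBuilder.standard()
        .withRegion(Regions.US_EAST_1)
        .build();
  }
}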

Deploy artifacts to AWS S3

You can easily send your project artifact to an S3 bucket. Here are the steps you need to follow:

  1. Create an S3 bucket. In this guide, we will assume you have named your bucket 'deployment-bucket'.

  2. Create an IAM Policy (or update an existing one) and make sure it grants the following action (the sketch after these steps illustrates the call this permission covers):

"s3:PutObject"
  3. Copy and paste the following properties into your project definition file (pom.xml).
<config.plugins.aws.s3.enabled>true</config.plugins.aws.s3.enabled>
<config.plugins.aws.s3.region>us-east-1</config.plugins.aws.s3.region>
<config.plugins.aws.s3.bucket>deployment-bucket</config.plugins.aws.s3.bucket>
<config.plugins.jar.enabled>true</config.plugins.jar.enabled>

Notes about S3 plugin:

  • config.plugins.aws.s3.enabled: it turns on the S3 deployment plugin
  • config.plugins.aws.s3.region: the region where your bucket is located
  • config.plugins.aws.s3.bucket: the name of the bucket the plugin should send the artifact to

Notes about the Jar plugin:

  • config.plugins.jar.enabled: if true, will generate a runnable jar from your project.
  4. Run 'kikaha deploy' or 'mvn clean deploy' to send the generated artifact to the S3 bucket.
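
For the curious, the sketch below is a rough, hypothetical illustration of the SDK call such a deployment boils down to, which is why the IAM policy must grant "s3:PutObject". It is not the plugin's actual code and assumes the aws-java-sdk-s3 dependency; the bucket name and region come from the properties above.

// Hypothetical illustration only (assumption: aws-java-sdk-s3 is available), not the plugin's code.
import java.io.File;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class ArtifactUploader {

  public void upload(File artifact) {
    AmazonS3 s3 = AmazonS3ClientBuilder.standard()
        .withRegion("us-east-1") // config.plugins.aws.s3.region
        .build();
    // This PutObject request is the operation the "s3:PutObject" action grants.
    s3.putObject("deployment-bucket", artifact.getName(), artifact);
  }
}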

Deploy artifacts through AWS Code Deploy

You can also notify AWS CodeDeploy to deploy new artifacts on your servers.

  1. Follow the guide above and configure your project to deploy on S3. AWS CodeDeploy can retrieve artifacts from S3 buckets and deploy them on your EC2 instances.

  2. Follow only the two steps below from this AWS CodeDeploy guide. We will assume you have created a CodeDeploy application named 'microservice1.example.com' and a deployment group named 'production':

    • Configure your instances
    • Creating Your Application and Deployment Groups
  3. Once you've configured the S3 deployment, include the following properties in your pom.xml file.

<config.plugins.aws.s3.useCodeDeploy>true</config.plugins.aws.s3.useCodeDeploy>
<config.plugins.aws.s3.codeDeployApplicationName>microservice1.example.com</config.plugins.aws.s3.codeDeployApplicationName>
<config.plugins.aws.s3.codeDeployDeploymentGroupName>production</config.plugins.aws.s3.codeDeployDeploymentGroupName>

Notes about the CodeDeploy plugin:

  • config.plugins.aws.s3.useCodeDeploy: if true, notifies your CodeDeploy application that a new revision is ready (see the sketch after these steps for the underlying API call)
  • config.plugins.aws.s3.codeDeployApplicationName: the name of your CodeDeploy application
  • config.plugins.aws.s3.codeDeployDeploymentGroupName: the deployment group you defined for this app
  4. If everything went right, you should be able to run 'kikaha deploy' or 'mvn clean deploy' and see your EC2 instances being updated automatically.
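
For reference, the sketch below is a rough, hypothetical illustration of the CodeDeploy call behind "notifying that a new revision is ready". It is not the plugin's actual code and assumes the aws-java-sdk-codedeploy dependency; the bundle type and artifact key are placeholders.

// Hypothetical illustration only (assumption: aws-java-sdk-codedeploy is available), not the plugin's code.
import com.amazonaws.services.codedeploy.AmazonCodeDeploy;
import com.amazonaws.services.codedeploy.AmazonCodeDeployClientBuilder;
import com.amazonaws.services.codedeploy.model.BundleType;
import com.amazonaws.services.codedeploy.model.CreateDeploymentRequest;
import com.amazonaws.services.codedeploy.model.RevisionLocation;
import com.amazonaws.services.codedeploy.model.RevisionLocationType;
import com.amazonaws.services.codedeploy.model.S3Location;

public class CodeDeployNotifier {

  public String notifyNewRevision(String artifactKey) {
    AmazonCodeDeploy codeDeploy = AmazonCodeDeployClientBuilder.defaultClient();
    RevisionLocation revision = new RevisionLocation()
        .withRevisionType(RevisionLocationType.S3)
        .withS3Location(new S3Location()
            .withBucket("deployment-bucket")   // config.plugins.aws.s3.bucket
            .withKey(artifactKey)              // key of the artifact previously sent to S3
            .withBundleType(BundleType.Zip));  // placeholder bundle type
    // Asks CodeDeploy to roll the new revision out to the 'production' deployment group.
    return codeDeploy.createDeployment(new CreateDeploymentRequest()
            .withApplicationName("microservice1.example.com")  // codeDeployApplicationName
            .withDeploymentGroupName("production")             // codeDeployDeploymentGroupName
            .withRevision(revision))
        .getDeploymentId();
  }
}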

Auto Join on AWS Application Load Balancer

TODO