AKS Cluster, Cosmos DB, and ACR using Bicep

Overview

This repository shows how to use a modular Infrastructure as Code approach to provision an AKS cluster and a few related resources.

The Bicep modules in this repository are designed with a baseline architecture in mind. You can use them as-is or modify them to suit your needs.

The Bicep modules provision the following Azure resources at subscription scope.

  1. A Resource Group
  2. A Managed Identity
  3. An Azure Container Registry for storing images
  4. A VNet required for configuring the AKS (optional)
  5. An AKS Cluster
  6. A Cosmos DB SQL API Account along with a Database, Container, and SQL Role to manage RBAC
  7. A Log Analytics Workspace (optional)

Architecture

Architecture Diagram

Securing the Cosmos DB account

You can configure the Azure Cosmos DB account to:

  1. Allow access only from a specific subnet of a virtual network (VNet), or make it accessible from any source.
  2. Authorize requests accompanied by a valid authorization token, or restrict access using RBAC and Managed Identity.

For simplicity, these restrictions are not implemented in this sample. Consider the following best practices to enhance the security of the Azure Cosmos DB account in production applications:

  1. Limit access to a specific subnet by configuring a virtual network service endpoint.
  2. Set disableLocalAuth = true in the databaseAccount resource to enforce RBAC as the only authentication method.

Refer to the comments in the Bicep\modules\cosmos\cosmos.bicep and Bicep\modules\vnet\vnet.bicep files and edit them as required to apply the restrictions mentioned above.
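If you decide to apply these restrictions after a deployment, both can also be set from the Azure CLI. The following is a minimal sketch, not part of the sample, that assumes your own resource group, account, VNet, and subnet names:

# Enforce RBAC as the only authentication method (equivalent to disableLocalAuth = true)
az resource update --resource-group {Resource Group Name} --name {Cosmos DB Account Name} --resource-type "Microsoft.DocumentDB/databaseAccounts" --set properties.disableLocalAuth=true

# Limit access to a specific subnet (the subnet needs the Microsoft.AzureCosmosDB service endpoint enabled)
az cosmosdb network-rule add --resource-group {Resource Group Name} --name {Cosmos DB Account Name} --virtual-network {VNet Name} --subnet {Subnet Name}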

Deploy infrastructure with Bicep

1. Clone the repository

Clone the repository and change to the Bicep folder.

cd Bicep

2. Login to your Azure Account

az login

az account set -s <Subscription ID>

3. Initialize Parameters

Create a param.json file from the following JSON, replacing the {Resource Group Name}, {Cosmos DB Account Name}, and {ACR Name} placeholders with your own values for the Resource Group, Cosmos DB Account, and Azure Container Registry instance names. Refer to Naming rules and restrictions for Azure resources.

{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "rgName": {
      "value": "{Resource Group Name}"
    },
    "cosmosName" :{
      "value": "{Cosmos DB Account Name}"
    },
    "acrName" :{
      "value": "{ACR Name}"
    },
    "throughput" :{
        "value": 11000
    }
  }
}

Caution The default settings provision autoscale throughput of 11,000 RU/s on the OrderContainer Cosmos DB container. Because a single physical partition supports at most 10,000 RU/s, this sets the container's physical partition count to 2.
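Optionally, before creating the deployment, you can preview the resources it would create with a what-if operation; a minimal sketch using the same template and parameter file:

az deployment sub what-if --location {Location} --template-file main.bicep --parameters @param.json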

4. Run Deployment

Run the following script to create the deployment

deploymentName='{Deployment Name}'  # Deployment Name
location='{Location}' # Location for deploying the resources

az deployment sub create --name $deploymentName --location $location --template-file main.bicep --parameters @param.json

Deployment Started

The deployment can take about 10 minutes. Once provisioning completes, you should see JSON output with Succeeded as the provisioning state.

Deployment Success

You can also see the deployment status in the Resource Group

Deployment Status inside RG
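You can also query the provisioning state from the CLI, for example:

az deployment sub show --name $deploymentName --query properties.provisioningState -o tsv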

5. Sign in to the AKS Cluster

Use the command below to sign in to your AKS cluster. This command also downloads credentials and configures kubectl to use them.

aksName=$(az deployment sub show --name $deploymentName --query 'properties.outputs.aksName.value' -o tsv)
rgName=$(az deployment sub show --name $deploymentName --query 'properties.outputs.resourceGroup.value' -o tsv)
az aks get-credentials -n $aksName -g $rgName
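To confirm that the credentials work, you can list the cluster nodes, for example:

kubectl get nodes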

6. Deploy KEDA and External Scaler

Use the commands below to add and update the KEDA Helm chart repo.

helm repo add kedacore https://kedacore.github.io/charts
helm repo update

Use the command below to install the KEDA Helm chart (or follow one of the other installation methods in the KEDA documentation).

helm install keda kedacore/keda --namespace keda --create-namespace

Alternatively, you can use the managed AKS add-on for KEDA. Install the AKS KEDA add-on with the Azure CLI:

Register the AKS-KedaPreview feature flag by using the az feature register command, as shown in the following example:

az feature register --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview"

It takes a few minutes for the status to show Registered. Verify the registration status by using the az feature show command:

az feature show --namespace "Microsoft.ContainerService" --name "AKS-KedaPreview"

When the status reflects Registered, refresh the registration of the Microsoft.ContainerService resource provider by using the az provider register command:

az provider register --namespace Microsoft.ContainerService
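As with the feature flag, you can check that the resource provider registration has completed, for example:

az provider show --namespace Microsoft.ContainerService --query registrationState -o tsv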

To install the KEDA add-on, use --enable-keda when creating or updating a cluster.

az aks update \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-keda
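If you use the add-on, you can verify that KEDA was enabled on the cluster; a minimal check:

az aks show --resource-group myResourceGroup --name myAKSCluster --query "workloadAutoScalerProfile.keda.enabled"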

Use the command below to install the Azure Cosmos DB external scaler Helm chart.

helm install external-scaler-azure-cosmos-db kedacore/external-scaler-azure-cosmos-db --namespace keda --create-namespace

For more information, refer to the Deploying KEDA documentation page.
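Whichever installation method you choose, you can confirm that the external scaler (and, for a Helm-based install, the KEDA operator) is running in the keda namespace, for example:

kubectl get pods --namespace keda

Note that the AKS KEDA add-on runs the operator in the kube-system namespace instead.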

7. Enable Azure Monitor Managed Prometheus and Managed Grafana on the AKS Cluster

Prerequisites:

  1. Register the AKS-PrometheusAddonPreview feature flag in the AKS cluster's subscription with the following command in the Azure CLI:

    
    az feature register --namespace Microsoft.ContainerService --name AKS-PrometheusAddonPreview
    
  2. Install the aks-preview extension by using the following command:

    az extension add --name aks-preview
    

For more information on how to install a CLI extension, see Use and manage extensions with the Azure CLI.

Note The aks-preview version 0.5.138 or higher is required for this feature. Check the aks-preview version by using the az version command.

  1. Add the custom Grafana dashboard to your Managed Grafana instance by importing the Build Demo Grafana dashboard.json file from the src folder.
  2. To get Cosmos DB metrics into Grafana, you need to grant the Managed Grafana instance read access to the Cosmos DB account. That will allow you to pin any metric to a Grafana dashboard that has the required read access (see the role assignment sketch below).
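One way to grant that read access is a role assignment on the Cosmos DB account scoped to the Managed Grafana instance's managed identity. A minimal sketch, assuming placeholder values; the Monitoring Reader role is an assumption, so pick the narrowest built-in role that meets your needs:

az role assignment create --assignee {Grafana Managed Identity Principal ID} --role "Monitoring Reader" --scope $(az cosmosdb show --resource-group {Resource Group Name} --name {Cosmos DB Account Name} --query id -o tsv)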

8. Install the Metrics add-on

Use an existing Azure Monitor workspace and link it with an existing Grafana workspace.

This option creates a link between the Azure Monitor workspace and the Grafana workspace.


az aks update --enable-azuremonitormetrics -n <cluster-name> -g <cluster-resource-group> --azure-monitor-workspace-resource-id <azure-monitor-workspace-name-resource-id> --grafana-resource-id <grafana-workspace-name-resource-id>

Testing sample application locally on Docker

Note For simplicity, we will use the connection string method to connect locally (using Docker) to Azure Cosmos DB. Once deployed to the AKS cluster, the applications will use Managed Identity to connect. Managed Identity is more secure because there is no risk of accidentally leaking the connection string.

  1. On your development machine clone the repo.

  2. Open command prompt or shell and change to the root directory of the cloned repo.

  3. Run the below commands to build the Docker container images for order-generator and order-processor applications.

    
    docker build --file .\src\Scaler.Demo\OrderGenerator\Dockerfile --force-rm --tag cosmosdb-order-generator .\src
    docker build --file .\src\Scaler.Demo\OrderProcessor\Dockerfile --force-rm --tag cosmosdb-order-processor .\src
    
  4. Start a new shell instance and run the order-processor application in a new container. You can put the same connection string in both places in the command below. Note that the sample applications are written to handle different Cosmos DB accounts for monitored and lease containers but having two different accounts is not a requirement.

    
    docker run --env CosmosDbConfig__Connection="<connection-string>" --env CosmosDbConfig__LeaseConnection="<connection-string>" --interactive --rm --tty cosmosdb-order-processor
    

    You should see the following result.

    
    2023-05-15 13:05:38 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Started change feed processor instance Instance-b0001b3034c7
    2023-05-15 13:05:38 info: Microsoft.Hosting.Lifetime[0]
          Application started. Press Ctrl+C to shut down.
    2023-05-15 13:05:38 info: Microsoft.Hosting.Lifetime[0]
          Hosting environment: Production
    2023-05-15 13:05:38 info: Microsoft.Hosting.Lifetime[0]
          Content root path: /app
    

    With the default application settings, the monitored and lease containers share the same database. The order-processor application then activates a change-feed processor to monitor and process new changes in the monitored container.

  5. Keep the order-processor application running. Start a second shell instance and run the order-generator application to add fake orders to the OrderContainer Cosmos DB container.

    docker run --env CosmosDbConfig__Connection="<connection-string>" --interactive --tty --rm cosmosdb-order-generator true 10 false
    

    You should see the following result.

    Creating order 252d35a5-4064-455a-abbf-0ae55edf92d3 - 10 unit(s) of Hat for Kenya Greenfelder
    Creating order 179a3af0-388f-4e1e-aaf3-e688f4262366 - 8 unit(s) of Salad for Gabe Witting
    Creating order 20df0b3c-a8e7-4ed0-b815-d14769067b31 - 2 unit(s) of Ball for Lucio Reilly
    Creating order 1ac64394-15ac-4310-91d8-bbb35d646461 - 9 unit(s) of Ball for Wilmer Mohr
    Creating order 49fb0b75-1ad3-4406-9253-14cb3ff83a94 - 10 unit(s) of Pants for Madie Sawayn
    Creating order 14cc6901-cd49-4781-8fd1-29e230cea6cb - 6 unit(s) of Shirt for Miracle O'Connell
    Creating order 8048f8bc-1f39-44bb-bffb-2fa75d500b69 - 5 unit(s) of Cheese for Lew Lindgren
    Creating order 9e8330f2-9fe3-4b2c-9f3d-5c01b5a4a9fc - 5 unit(s) of Shoes for Alison O'Reilly
    Creating order 9214149f-654a-46ba-9c3e-7832e262061e - 6 unit(s) of Hat for Louvenia Wolf
    Creating order 6744bcc6-4c07-4539-9ea2-69780c163396 - 10 unit(s) of Pants for Vesta Nolan
    
  6. Go back to the first shell where the order-processor application is running. Check the console output and verify that the orders were processed. You should see the following result.

    2023-05-15 13:09:09 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          8 order(s) received
    2023-05-15 13:09:09 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Processing order 057723ee-8334-4688-9b68-5c8c8a20a7d8 - 10 unit(s) of Pants bought by Vesta Nolan
    2023-05-15 13:09:10 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          2 order(s) received
    2023-05-15 13:09:10 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Processing order 746bf418-2090-4018-80b2-602498c590fe - 10 unit(s) of Hat bought by Kenya Greenfelder
    2023-05-15 13:09:11 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Order 057723ee-8334-4688-9b68-5c8c8a20a7d8 processed
    2023-05-15 13:09:11 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Processing order a74e1fc0-ea13-46bc-88bf-01f057466ec6 - 10 unit(s) of Pants bought by Madie Sawayn
    2023-05-15 13:09:12 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Order 746bf418-2090-4018-80b2-602498c590fe processed
    2023-05-15 13:09:12 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Processing order 5a40f671-bdda-40b1-9391-78e7a65beabb - 6 unit(s) of Hat bought by Louvenia Wolf
    2023-05-15 13:09:13 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Order a74e1fc0-ea13-46bc-88bf-01f057466ec6 processed
    2023-05-15 13:09:13 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Processing order 56743eec-a932-4e6f-bc6a-17eff7febf0b - 5 unit(s) of Shoes bought by Alison O'Reilly
    2023-05-15 13:09:14 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Order 5a40f671-bdda-40b1-9391-78e7a65beabb processed
    2023-05-15 13:09:15 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Order 56743eec-a932-4e6f-bc6a-17eff7febf0b processed
    2023-05-15 13:09:15 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Processing order 952a4fe2-23c5-4dbf-b70c-58a43d7aeb1f - 9 unit(s) of Ball bought by Wilmer Mohr
    2023-05-15 13:09:17 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Order 952a4fe2-23c5-4dbf-b70c-58a43d7aeb1f processed
    2023-05-15 13:09:17 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Processing order e678e3a8-a71e-42c9-9bcd-7480efb451b3 - 2 unit(s) of Ball bought by Lucio Reilly
    2023-05-15 13:09:19 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Order e678e3a8-a71e-42c9-9bcd-7480efb451b3 processed
    2023-05-15 13:09:19 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Processing order 1b66b42e-03bc-4889-ac4e-d906d4089885 - 6 unit(s) of Shirt bought by Miracle O'Connell
    2023-05-15 13:09:21 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Order 1b66b42e-03bc-4889-ac4e-d906d4089885 processed
    2023-05-15 13:09:21 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Processing order cff54c61-e60b-472b-a5bd-fa6ff9581354 - 8 unit(s) of Salad bought by Gabe Witting
    2023-05-15 13:09:23 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Order cff54c61-e60b-472b-a5bd-fa6ff9581354 processed
    2023-05-15 13:09:23 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Processing order 6a00421d-3a63-4261-95e8-0014998dbe91 - 5 unit(s) of Cheese bought by Lew Lindgren
    2023-05-15 13:09:25 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Order 6a00421d-3a63-4261-95e8-0014998dbe91 processed
    
  7. Stop the order-processor container from the first shell by pressing Ctrl+C.

Deploying KEDA and the external scaler to the cluster

Authenticate to Azure and Set Local Environment Variables

  1. Open command prompt or shell and change to the root directory of the cloned repo.

    az login
    
    az account set -s <Subscription ID>
    
  2. Find the Bicep deployment and use its outputs to set local environment variables.

    az deployment sub list --query '[].name'
    acrName=$(az deployment sub show --name $deploymentName --query 'properties.outputs.acrName.value' -o tsv)
    cosmosName=$(az deployment sub show --name $deploymentName --query 'properties.outputs.cosmosName.value' -o tsv)
    clientId=$(az deployment sub show --name $deploymentName --query 'properties.outputs.workloadIdentity.value' -o tsv)
    

Build and Publish the External Scaler Container Image

Building locally and publishing to Azure Container Registry

  1. Build and push the scaler container image to Azure Container Registry.

    az acr login --name $acrName
    docker build --file .\src\Scaler\Dockerfile --force-rm --tag $acrName.azurecr.io/cosmosdb/scaler .\src
    docker push $acrName.azurecr.io/cosmosdb/scaler
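
    You can confirm that the image landed in the registry, for example:

    az acr repository show-tags --name $acrName --repository cosmosdb/scaler --output table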
    

Building and Publishing with Azure Container Registry

  1. Use the az CLI to submit the build context to Azure Container Registry to be built

    az acr build --registry $acrName -f ./src/Scaler/Dockerfile -t cosmosdb/scaler:latest ./src
    

Create the service account for AKS Workload Identity

  1. Get the Client ID for the Managed Identity that the services will use to access Cosmos DB.

    echo "Workload IdentityclientId: $clientId"
    
  2. Using the following YAML template, create a 'service_account.yaml'

    # Create namespace for the service account and services
    apiVersion: v1
    kind: Namespace
    metadata:
      creationTimestamp: null
      name: cosmosdb-order-processor
    
    ---
    # Create service account for use with workload identity
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      annotations:
        azure.workload.identity/client-id: {AKS Managed Identity Client ID}
      name: workload-identity-sa
      namespace: cosmosdb-order-processor
    
  3. Apply 'service_account' deployment YAML

    kubectl apply -f service_account.yaml
    

    You should see the following results.

    namespace/cosmosdb-order-processor created
    serviceaccount/workload-identity-sa created
    

Deploy the scaler to AKS

  1. Retrieve the ACR Name to update the deployment YAML.

    echo "ACR Name: $acrName"
    
  2. Using the following YAML template, create a 'scaler_deploy.yaml' file. Make sure to replace the {ACR Name} placeholder with your own value.

    # Deploy KEDA external scaler for Azure Cosmos DB.
    
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cosmosdb-scaler
      namespace: cosmosdb-order-processor
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: cosmosdb-scaler
      template:
        metadata:
          labels:
            azure.workload.identity/use: "true"
            app: cosmosdb-scaler
        spec:
          serviceAccountName: workload-identity-sa
          containers:
            - image: {ACR Name}.azurecr.io/cosmosdb/scaler:latest   # update as per your environment, example myacrname.azurecr.io/cosmosdb/scaler:latest. Do NOT add https:// in ACR Name
              imagePullPolicy: Always
              name: cosmosdb-scaler
              ports:
                - containerPort: 4050
    
    ---
    # Assign hostname to the scaler application.
    
    apiVersion: v1
    kind: Service
    metadata:
      name: cosmosdb-scaler
      namespace: cosmosdb-order-processor
    spec:
      ports:
        - port: 4050
          targetPort: 4050
      selector:
        app: cosmosdb-scaler
    
  3. Apply 'scaler_deploy' deployment YAML

    kubectl apply -f scaler_deploy.yaml
    

    You should see the following result.

    deployment.apps/cosmosdb-scaler created
    service/cosmosdb-scaler created
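
    To confirm the scaler started cleanly, you can also check its pod logs, for example:

    kubectl logs deployment/cosmosdb-scaler --namespace cosmosdb-order-processor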
    

Deploying the order processor to the cluster

Build and Publish the Order Processor Container Image

Building locally and publishing to Azure Container Registry

  1. Build and push the cosmosdb-order-processor container image to Azure Container Registry.

    docker build --file ./src/Scaler.Demo/OrderProcessor/Dockerfile --force-rm --tag $acrName.azurecr.io/cosmosdb/order-processor ./src
    docker push $acrName.azurecr.io/cosmosdb/order-processor
    

Building and Publishing with Azure Container Registry

  1. Use the az CLI to submit the build context to Azure Container Registry to be built

    az acr build --registry $acrName -f ./src/Scaler.Demo/OrderProcessor/Dockerfile -t cosmosdb/order-processor:latest ./src
    

Deploy the OrderProcessor Service to AKS

  1. Retrieve the ACR Name and Cosmos DB Account Name to update the deployment YAML.

    echo "ACR Name: $acrName"
    echo "Cosmos DB Account Name: $cosmosName"
    
  2. Using the following YAML template, create an 'orderprocessor_deploy.yaml' file. Make sure to replace the {ACR Name} and {Cosmos DB Account Name} placeholders with your own values.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: cosmosdb-order-processor
      namespace: cosmosdb-order-processor
      labels:
        app: cosmosdb-order-processor
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: cosmosdb-order-processor
      template:
        metadata:
          labels:
            app: cosmosdb-order-processor
            azure.workload.identity/use: "true"
        spec:
          serviceAccountName: workload-identity-sa
          containers:
          - name: mycontainer
            image: {ACR Name}.azurecr.io/cosmosdb/order-processor:latest   # update as per your environment, example myacrname.azurecr.io/cosmosdb/order-processor:latest. Do NOT add https:// in ACR Name
            imagePullPolicy: Always
            env:
              - name: CosmosDbConfig__Endpoint
                value: https://{Cosmos DB Account Name}.documents.azure.com:443/  # update as per your environment
              - name: CosmosDbConfig__LeaseEndpoint
                value: https://{Cosmos DB Account Name}.documents.azure.com:443/ # update as per your environment
  3. Apply 'orderprocessor_deploy.yaml' deployment YAML

    kubectl apply -f orderprocessor_deploy.yaml
    
  4. Ensure that the order-processor application is running correctly on the cluster by checking application logs.

    kubectl get pods --namespace cosmosdb-order-processor
    

    You should see the following result.

    NAME                                        READY   STATUS    RESTARTS   AGE
    cosmosdb-order-processor-855f54dcd4-4mvmt   1/1     Running   0          5s
    cosmosdb-scaler-7d6fdc84b7-hgxmg            1/1     Running   0          15m
    

    Check the cosmosdb-order-processor logs

    kubectl logs cosmosdb-order-processor-855f54dcd4-4mvmt --namespace cosmosdb-order-processor
    

    You should see the following result.

    2023-05-15 15:29:29 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Started change feed processor instance Instance-cosmosdb-order-processor-855f54dcd4-4mvmt
    2023-05-15 15:29:29 info: Microsoft.Hosting.Lifetime[0]
          Application started. Press Ctrl+C to shut down.
    2023-05-15 15:29:29 info: Microsoft.Hosting.Lifetime[0]
          Hosting environment: Production
    2023-05-15 15:29:29 info: Microsoft.Hosting.Lifetime[0]
          Content root path: /app
    

Deploy the Scaled Object to AKS

  1. Retrieve the Cosmos DB Account Name to update the deployment YAML.

    echo "Cosmos DB Account Name: $cosmosName"
    
  2. Using the following YAML template, create a 'deploy-scaledobject.yaml' file. Make sure to replace the {Cosmos DB Account Name} placeholder with your own value.

    # Create KEDA scaled object to scale order processor application.
    
    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: cosmosdb-order-processor-scaledobject
      namespace: cosmosdb-order-processor
    spec:
      pollingInterval: 20
      scaleTargetRef:
        name: cosmosdb-order-processor
      triggers:
        - type: external
          metadata:
            scalerAddress: cosmosdb-scaler.cosmosdb-order-processor:4050
            endpoint: https://{Cosmos DB Account Name}.documents.azure.com:443/ # update as per your environment
            databaseId: StoreDatabase
            containerId: OrderContainer
            LeaseEndpoint: https://{Cosmos DB Account Name}.documents.azure.com:443/ # update as per your environment
            leaseDatabaseId: StoreDatabase
            leaseContainerId: OrderProcessorLeases
            processorName: OrderProcessor
    
  3. Apply 'deploy-scaledobject.yaml' deployment YAML

    kubectl apply -f deploy-scaledobject.yaml
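
    You can confirm that KEDA picked up the scaled object, for example:

    kubectl get scaledobject --namespace cosmosdb-order-processor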
    

Testing auto-scaling for the sample application

  1. Wait for a few minutes.

  2. Verify that there is no order-processor pod running after the scaled object was created.

     kubectl get pods --namespace cosmosdb-order-processor
    

    You should see the following result.

    NAME                               READY   STATUS    RESTARTS   AGE
    cosmosdb-scaler-69694c8858-7xbpj   1/1     Running   0          14h
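
    KEDA drives the scaling through a Horizontal Pod Autoscaler that it creates for the scaled object (named keda-hpa-<scaled object name>); you can inspect it, for example:

    kubectl get hpa --namespace cosmosdb-order-processor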
    

Deploying the single partition order generator to the cluster

Building locally and publishing to Azure Container Registry

  1. Build and push the cosmosdb-order-generator container image to Azure Container Registry ($acrName was set earlier from the deployment outputs).

    docker build --file ./src/Scaler.Demo/OrderGenerator/Dockerfile --force-rm --tag $acrName.azurecr.io/cosmosdb/order-generator ./src
    docker push $acrName.azurecr.io/cosmosdb/order-generator
    
    

Building and Publishing with Azure Container Registry

  1. Use the az CLI to submit the build context to Azure Container Registry to be built

    az acr build --registry $acrName -f ./src/Scaler.Demo/OrderGenerator/Dockerfile -t cosmosdb/order-generator:latest ./src
    

Deploying the Single Partition OrderGenerator Service to AKS

  1. Retrieve the ACR Name and Cosmos DB Account Name to update the deployment YAML.

    echo "ACR Name: $acrName"
    echo "Cosmos DB Account Name: $cosmosName"
    
  2. Using the following YAML template, create an 'ordergenerator_sp_deploy.yaml' file. Make sure to replace the {ACR Name} and {Cosmos DB Account Name} placeholders with your own values.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: single-partition-order-generator
      namespace: cosmosdb-order-processor
      labels:
        app: single-partition-order-generator
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: single-partition-order-generator
      template:
        metadata:
          labels:
            app: single-partition-order-generator
            azure.workload.identity/use: "true"
        spec:
          serviceAccountName: workload-identity-sa
          containers:
          - name: mycontainer
            image: {ACR Name}.azurecr.io/cosmosdb/order-generator:latest   # update as per your environment, example myacrname.azurecr.io/cosmosdb/order-generator:latest. Do NOT add https:// in ACR Name
            imagePullPolicy: Always
            args: ["false", "true","25"]
            env:
              - name: CosmosDbConfig__Endpoint
                value: https://{Cosmos DB Account Name}.documents.azure.com:443/  # update as per your environment
    
  3. Apply 'ordergenerator_sp_deploy.yaml' deployment YAML

    kubectl apply -f ordergenerator_sp_deploy.yaml
    
  4. Verify that only one pod is created for the order-processor. It may take a few seconds for the pod to show up.

    kubectl get pods --namespace cosmosdb-order-processor


    You should see the following result.

    NAME                                                READY   STATUS    RESTARTS   AGE
    cosmosdb-order-processor-767d498685-cmf8l           1/1     Running   0          18s
    cosmosdb-scaler-69694c8858-7xbpj                    1/1     Running   0          14h
    single-partition-order-generator-75d88b4846-s2v42   1/1     Running   0          30s
    

Deploying the multi partition order generator to the cluster

  1. Retrieve the ACR Name and Cosmos DB Account Name to update the deployment YAML.

    echo "ACR Name: $acrName"
    echo "Cosmos DB Account Name: $cosmosName"
    

Now, add more orders to the Cosmos DB container but this time across multiple partitions.

  1. Using the following YAML template, create an 'ordergenerator_mp_deploy.yaml' file. Make sure to replace the {ACR Name} and {Cosmos DB Account Name} placeholders with your own values.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: multi-partition-order-generator
      namespace: cosmosdb-order-processor
      labels:
        app: multi-partition-order-generator
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: multi-partition-order-generator
      template:
        metadata:
          labels:
            app: multi-partition-order-generator
            azure.workload.identity/use: "true"
        spec:
          serviceAccountName: workload-identity-sa
          containers:
          - name: mycontainer
            image: {ACR Name}.azurecr.io/cosmosdb/order-generator:latest   # update as per your environment, example myacrname.azurecr.io/cosmosdb/order-generator:latest. Do NOT add https:// in ACR Name
            imagePullPolicy: Always
            args: ["false", "false","25"]
            env:
              - name: CosmosDbConfig__Endpoint
                value: https://{Cosmos DB Account Name}.documents.azure.com:443/  # update as per your environment
    
  2. Apply 'ordergenerator_mp_deploy.yaml' deployment YAML

    kubectl apply -f ordergenerator_mp_deploy.yaml
    
  3. Verify that two pods are created for the order-processor.

    kubectl get pods --namespace cosmosdb-order-processor
    

    You should see the following result.

    NAME                                                READY   STATUS    RESTARTS   AGE
    cosmosdb-order-processor-767d498685-cmf8l           1/1     Running   0          3m5s
    cosmosdb-order-processor-767d498685-t7fs5           1/1     Running   0          2s
    cosmosdb-scaler-69694c8858-7xbpj                    1/1     Running   0          14h
    multi-partition-order-generator-776bbbfff4-x2h6n    1/1     Running   0          6s
    single-partition-order-generator-75d88b4846-s2v42   1/1     Running   0          3m17s
    
  4. You can also verify that both order-processor pods are able to share the processing of orders.

    1. Check the first order processor pod logs.

      kubectl logs cosmosdb-order-processor-767d498685-cmf8l --tail=4
      

      You should see the following result.

      2021-09-03 12:57:41 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Order 5ba7f503-0185-49f6-9fce-3da999464049 processed
      2021-09-03 12:57:41 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Processing order ce1f05ad-08ff-4535-858f-3158de41971b - 8 unit(s) of Computer bought by Jaren Tremblay
      
    2. Check the other order processor pod logs.

      kubectl logs cosmosdb-order-processor-767d498685-t7fs5 --tail=4
      

      You should see the following result.

      2021-09-03 12:57:53 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Order e881c998-1318-411e-8181-fa638335910e processed
      2021-09-03 12:57:53 info: Keda.CosmosDb.Scaler.Demo.OrderProcessor.Worker[0]
          Processing order ca17597f-7aa2-4b04-abd8-724139b2c370 - 1 unit(s) of Gloves bought by Donny Shanahan
      

Stop the order generators on the cluster

  1. Execute the commands below to delete the order generator pods.

kubectl delete -f ordergenerator_sp_deploy.yaml
kubectl delete -f ordergenerator_mp_deploy.yaml

  2. Wait for 10-15 minutes. The order-processor pods will automatically scale down to 0.

kubectl get pods --namespace cosmosdb-order-processor

You should see the following result.

NAME                                       READY   STATUS    RESTARTS   AGE
cosmosdb-scaler-64dd48678c-d6dqq           1/1     Running   0          35m

Cleanup

Use the below commands to delete the Resource Group and Deployment

az group delete -g $rgName -y
az deployment sub delete -n $deploymentName
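The resource group deletion can take several minutes; you can confirm it has completed (the command returns false once the group is gone), for example:

az group exists --name $rgName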

Note You can find the commands used for the Build Demo in the Demo Script file.